Transportation

Air India Boeing 787 Carrying 242 Passengers Crashes After Takeoff (msn.com) 159

Flying to London, a Boeing 787 aircraft operated by Air India "crashed shortly after taking off..." reports Bloomberg, "in what stands to be the worst accident involving the U.S. planemaker's most advanced widebody airliner." Flight AI171 was carrying 242 passengers and crew. Video footage shared on social media showed a giant plume of smoke engulfing the crash site, with no reports of survivors. [UPDATE: Reuters reports one passenger jumped out of the emergency exit and survived, with a senior police officer saying "chances are that there might be more survivors among the injured who are being treated in the hospital."]

The aircraft entered a slow descent shortly after taking off, with its landing gear still extended before exploding into a huge fireball upon impact. The crash took place in a residential area, which could mean a higher death toll... The pilots in command issued a mayday call immediately after take-off to air traffic controllers, according to India's civil aviation regulator.

Advertising

Amazon Is About To Be Flooded With AI-Generated Video Ads 30

Amazon has launched its AI-powered Video Generator tool in the U.S., allowing sellers to quickly create photorealistic, motion-enhanced video ads, often with a single click. "We'll likely see Amazon retailers utilizing AI-generated video ads in the wild now that the tool is generally available in the U.S. and costs nothing to use -- unless the ads are so convincing that we don't notice anything at all," says The Verge. From the report: New capabilities include motion improvements to show items in action, which Amazon says is best for showcasing products like toys, tools, and worn accessories. For example, Video Generator can now create clips that show someone wearing a watch on their wrist and checking the time, instead of simply displaying the watch on a table. The tool generates six different videos to choose from, and allows brands to add their logos to the finished results.

The Video Generator can now also make ads with multiple connected scenes that include humans, pets, text overlays, and background music. The editing timeline shown in Amazon's announcement video suggests the ads max out at 21 seconds. The resulting ads edge closer to the traditional commercials we're used to seeing while watching TV or online content, compared to raw clips generated by video AI tools like OpenAI's Sora or Adobe Firefly.

A new video summarization feature can create condensed video ads from existing footage, such as demos, tutorials, and social media content. Amazon says Video Generator will automatically identify and extract key clips to generate new videos formatted for ad campaigns. A one-click image-to-video feature is also available that creates shorter GIF-style clips to show products in action.

Social Networks

Bluesky's Decline Stems From Never Hearing From the Other Side (washingtonpost.com) 183

Bluesky's user engagement has fallen roughly 50% since peaking in mid-November, according to a recent Pew Research Center analysis, as progressive groups' efforts to migrate users from Elon Musk's X platform show signs of failure. The research found that while many news influencers maintain Bluesky accounts, two-thirds post irregularly compared to more than 80% who still post daily to X. A Washington Post columnist tries to make sense of it: The people who have migrated to Bluesky tend to be those who feel the most visceral disgust for Musk and Trump, plus a smattering of those who are merely curious and another smattering who are tired of the AI slop and unregenerate racism that increasingly pollutes their X feeds. Because the Musk and Trump haters are the largest and most passionate group, the result is something of an echo chamber where it's hard to get positive engagement unless you're saying things progressives want to hear -- and where the negative engagement on things they don't want to hear can be intense. That's true even for content that isn't obviously political: Ethan Mollick, a professor at the University of Pennsylvania's Wharton School who studies AI, recently announced that he'll be limiting his Bluesky posting because AI discussions on the platform are too "fraught."

All this is pretty off-putting for folks who aren't already rather progressive, and that creates a threefold problem for the ones who dream of getting the old band back together. Most obviously, it makes it hard for the platform to build a large enough userbase for the company to become financially self-sustaining, or for liberals to amass the influence they wielded on old Twitter. There, they accumulated power by shaping the contours of a conversation that included a lot of non-progressives. On Bluesky, they're mostly talking among themselves.

AI

China Shuts Down AI Tools During Nationwide College Exams 27

According to Bloomberg, several major Chinese AI companies, including Alibaba, ByteDance, and Tencent, have temporarily disabled certain chatbot features during the gaokao college entrance exams to prevent cheating. "Popular AI apps, including Alibaba's Qwen and ByteDance's Doubao, have stopped picture recognition features from responding to questions about test papers, while Tencent's Yuanbao and Moonshot's Kimi have suspended photo-recognition services entirely during exam hours," adds The Verge. From the report: The rigorous multi-day "gaokao" exams are sat by more than 13.3 million Chinese students between June 7th and 10th, each fighting to secure one of the limited spots at universities across the country. Students are already banned from using devices like phones and laptops during the hours-long tests, so the disabling of AI chatbots serves as an additional safety net to prevent cheating during exam season.

When asked to explain the suspension, the Yuanbao and Kimi chatbots responded that functions had been disabled "to ensure the fairness of the college entrance examinations," Bloomberg reports. Similarly, the DeepSeek AI tool that went viral earlier this year is also blocking its service during specific hours "to ensure fairness in the college entrance examination," according to The Guardian.
The Guardian notes that the story is being driven by students posting on the Chinese social media platform Weibo. "The gaokao entrance exam incites fierce competition as it's the only means to secure a college placement in China, driving concerns that students may try to improve their chances with AI tools," notes The Verge.

AI

'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com) 206

The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines." [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
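That decoding loop is easy to picture in miniature. Below is a toy sketch in Python with an invented two-word-context probability table; it is not any real model's code, and a production LLM computes the same kind of next-token distribution with billions of learned parameters rather than a lookup table:

    import random

    # Toy next-token table: a real LLM learns these probabilities from
    # training data; the contexts and numbers here are invented.
    NEXT_TOKEN_PROBS = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
        ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
        ("sat", "on"): {"the": 0.9, "a": 0.1},
        ("on", "the"): {"mat": 0.7, "sofa": 0.3},
    }

    def generate(context, max_tokens=4):
        tokens = list(context)
        for _ in range(max_tokens):
            dist = NEXT_TOKEN_PROBS.get(tuple(tokens[-2:]))
            if dist is None:
                break  # no known continuation for this context
            # A "statistically informed guess": sample the next token in
            # proportion to its estimated probability. Nothing here models
            # meaning or understanding.
            words, weights = zip(*dist.items())
            tokens.append(random.choices(words, weights=weights)[0])
        return " ".join(tokens)

    print(generate(["the", "cat"]))  # e.g. "the cat sat on the mat"
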
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.

AI

After 'AI-First' Promise, Duolingo CEO Admits 'I Did Not Expect the Blowback' (ft.com) 46

Last month, Duolingo CEO Luis von Ahn "shared on LinkedIn an email he had sent to all staff announcing Duolingo was going 'AI-first'," remembers the Financial Times.

"I did not expect the amount of blowback," he admits.... He attributes this anger to a general "anxiety" about technology replacing jobs. "I should have been more clear to the external world," he reflects on a video call from his office in Pittsburgh. "Every tech company is doing similar things [but] we were open about it...."

Since the furore, von Ahn has reassured customers that AI is not going to replace the company's workforce. There will be a "very small number of hourly contractors who are doing repetitive tasks that we no longer need", he says. "Many of these people are probably going to be offered contractor jobs for other stuff." Duolingo is still recruiting if it is satisfied the role cannot be automated. Graduates who make up half the people it hires every year "come with a different mindset" because they are using AI at university.

The thrust of the AI-first strategy, the 46-year-old says, is overhauling work processes... He wants staff to explore whether their tasks "can be entirely done by AI or with the help of AI. It's just a mind shift that people first try AI. It may be that AI doesn't actually solve the problem you're trying to solve... that's fine." The aim is to automate repetitive tasks to free up time for more creative or strategic work.

Examples where it is making a difference include technology and illustration. Engineers will spend less time writing code. "Some of it they'll need to but we want it to be mediated by AI," von Ahn says... Similarly, designers will have more of a supervisory role, with AI helping to create artwork that fits Duolingo's "very specific style". "You no longer do the details and are more of a creative director. For the vast majority of jobs, this is what's going to happen...." [S]ocietal implications for AI, such as the ethics of stealing creators' copyright, are "a real concern". "A lot of times you don't even know how [the large language model] was trained. We should be careful." When it comes to artwork, he says Duolingo is "ensuring that the entirety of the model is trained just with our own illustrations".

Government

Russian Spies Are Analyzing Data From China's WeChat App (nytimes.com) 17

An anonymous reader shared this report from The New York Times: Russian counterintelligence agents are analyzing data from the popular Chinese messaging and social media app WeChat to monitor people who might be in contact with Chinese spies, according to a Russian intelligence document obtained by The New York Times. The disclosure highlights the rising level of concern about Chinese influence in Russia as the two countries deepen their relationship. As Russia has become isolated from the West over its war in Ukraine, it has become increasingly reliant on Chinese money, companies and technology. But it has also faced what the document describes as increased Chinese espionage efforts.

The document indicates that the Russian domestic security agency, known as the F.S.B., pulls purloined data into an analytical tool known as "Skopishche" (a Russian word for a mob of people). Information from WeChat is among the data being analyzed, according to the document... One Western intelligence agency told The Times that the information in the document was consistent with what it knew about "Russian penetration of Chinese communications...." By design, [WeChat] does not use end-to-end encryption to protect user data. That is because the Chinese government exercises strict control over the app and relies on its weak security to monitor and censor speech. Foreign intelligence agencies can exploit that weakness, too...

WeChat was briefly banned in Russia in 2017, but access was restored after Tencent took steps to comply with laws requiring foreign digital platforms above a certain size to register as "organizers of information dissemination." The Times confirmed that WeChat is currently licensed by the government to operate in Russia. That license would require Tencent to store user data on Russian servers and to provide access to security agencies upon request.

AI

AI Firms Say They Can't Respect Copyright. But A Nonprofit's Researchers Just Built a Copyright-Respecting Dataset (msn.com) 100

Is copyrighted material a requirement for training AI? asks the Washington Post. That's what top AI companies are arguing, and "Few AI developers have tried the more ethical route — until now.

"A group of more than two dozen AI researchers have found that they could build a massive eight-terabyte dataset using only text that was openly licensed or in public domain. They tested the dataset quality by using it to train a 7 billion parameter language model, which performed about as well as comparable industry efforts, such as Llama 2-7B, which Meta released in 2023." A paper published Thursday detailing their effort also reveals that the process was painstaking, arduous and impossible to fully automate. The group built an AI model that is significantly smaller than the latest offered by OpenAI's ChatGPT or Google's Gemini, but their findings appear to represent the biggest, most transparent and rigorous effort yet to demonstrate a different way of building popular AI tools....

As it turns out, the task involves a lot of humans. That's because of the technical challenges of data not being formatted in a way that's machine readable, as well as the legal challenges of figuring out what license applies to which website, a daunting prospect when the industry is rife with improperly licensed data. "This isn't a thing where you can just scale up the resources that you have available" like access to more computer chips and a fancy web scraper, said Stella Biderman [executive director of the nonprofit research institute EleutherAI]. "We use automated tools, but all of our stuff was manually annotated at the end of the day and checked by people. And that's just really hard."
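The automated half of that screening is simple to picture; the hard part is everything around it. Here is a minimal sketch, in which the license fields and the allowlist are hypothetical, and the manual annotation and human checking Biderman describes are deliberately left out:

    # Hypothetical license screen: field names and the allowlist are invented
    # for illustration only. Per Biderman, the real pipeline still required
    # manual annotation on top of automated passes like this one.
    OPENLY_LICENSED = {"CC0-1.0", "CC-BY-4.0", "CC-BY-SA-4.0", "public-domain"}

    def keep_document(doc):
        # Exclude anything whose license is unknown, restrictive, or
        # unverified; ambiguous cases go to human review, not the dataset.
        return (doc.get("license") in OPENLY_LICENSED
                and doc.get("license_verified") is True)

    docs = [
        {"text": "...", "license": "CC-BY-4.0", "license_verified": True},
        {"text": "...", "license": "all-rights-reserved", "license_verified": True},
        {"text": "...", "license": "CC0-1.0", "license_verified": False},
    ]
    print([keep_document(d) for d in docs])  # [True, False, False]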

Still, the group managed to unearth new datasets that can be used ethically. Those include a set of 130,000 English language books in the Library of Congress, which is nearly double the size of the popular-books dataset Project Gutenberg. The group's initiative also builds on recent efforts to develop more ethical, but still useful, datasets, such as FineWeb from Hugging Face, the open-source repository for machine learning... Still, Biderman remained skeptical that this approach could find enough content online to match the size of today's state-of-the-art models... Biderman said she didn't expect companies such as OpenAI and Anthropic to start adopting the same laborious process, but she hoped it would encourage them to at least rewind back to 2021 or 2022, when AI companies still shared a few sentences of information about what their models were trained on.

"Even partial transparency has a huge amount of social value and a moderate amount of scientific value," she said.

Crime

Cambridge Mapping Project Solves a Medieval Murder (arstechnica.com) 11

An anonymous reader quotes a report from Ars Technica: In 2019, we told you about a new interactive digital "murder map" of London compiled by University of Cambridge criminologist Manuel Eisner. Drawing on data catalogued in the city coroners' rolls, the map showed the approximate location of 142 homicide cases in late medieval London. The Medieval Murder Maps project has since expanded to include maps of York and Oxford homicides, as well as podcast episodes focusing on individual cases. It's easy to lose oneself down the rabbit hole of medieval murder for hours, filtering the killings by year, choice of weapon, and location. Think of it as a kind of 14th-century version of Clue: It was the noblewoman's hired assassins armed with daggers in the streets of Cheapside near St. Paul's Cathedral. And that's just the juiciest of the various cases described in a new paper published in the journal Criminal Law Forum.

The noblewoman was Ela Fitzpayne, wife of a knight named Sir Robert Fitzpayne, lord of Stogursey. The victim was a priest and her erstwhile lover, John Forde, who was stabbed to death in the streets of Cheapside on May 3, 1337. "We are looking at a murder commissioned by a leading figure of the English aristocracy," said University of Cambridge criminologist Manuel Eisner, who heads the Medieval Murder Maps project. "It is planned and cold-blooded, with a family member and close associates carrying it out, all of which suggests a revenge motive." Members of the mapping project geocoded all the cases after determining approximate locations for the crime scenes. Written in Latin, the coroners' rolls are records of sudden or suspicious deaths as investigated by a jury of local men, called together by the coroner to establish facts and reach a verdict. Those records contain such relevant information as where the body was found and by whom; the nature of the wounds; the jury's verdict on cause of death; the weapon used and how much it was worth; the time, location, and witness accounts; whether the perpetrator was arrested, escaped, or sought sanctuary; and any legal measures taken.
The full historical context, analytical depth, and social commentary can be read in the paper.

Interestingly, Eisner "extended their spatial analysis to include homicides committed in York and Oxford in the 14th century with similar conclusions," writes Ars' Jennifer Ouellette. Most murders occurred in public places, usually on weekends, with knives and swords as primary weapons. Oxford had a significantly elevated violence rate compared to London and York, "suggestive of high levels of social disorganization and impunity."

London, meanwhile, showed distinct clusters of homicides, "which reflect differences in economic and social functions," the authors wrote. "In all three cities, some homicides were committed in spaces of high visibility and symbolic significance."

China

China Will Drop the Great Firewall For Some Users To Boost Free-Trade Port Ambitions (scmp.com) 49

China's southernmost province of Hainan is piloting a programme to grant select corporate users broad access to the global internet, a rare move in a country known for having some of the world's most restrictive online censorship, as the island seeks to transform itself into a global free-trade port. From a report: Employees of companies registered and operating in Hainan can apply for the "Global Connect" mobile service through the Hainan International Data Comprehensive Service Centre (HIDCSC), according to the agency, which is overseen by the state-run Hainan Big Data Development Centre.

The programme allows eligible users to bypass the so-called Great Firewall, which blocks access to many of the world's most-visited websites, such as Google and Wikipedia. Applicants must be on a 5G plan with one of the country's three major state-backed carriers -- China Mobile, China Unicom or China Telecom -- and submit their employer's information, including the company's Unified Social Credit Code, for approval. The process can take up to five months, HIDCSC staff said.

China

OpenAI Says Significant Number of Recent ChatGPT Misuses Likely Came From China (wsj.com) 19

OpenAI said it disrupted several attempts [non-paywalled source] from users in China to leverage its AI models for cyber threats and covert influence operations, underscoring the security challenges AI poses as the technology becomes more powerful. From a report: The Microsoft-backed company on Thursday published its latest report on disrupting malicious uses of AI, saying its investigative teams continued to uncover and prevent such activities in the three months since Feb. 21.

While misuse occurred in several countries, OpenAI said it believes a "significant number" of violations came from China, noting that four of 10 sample cases included in its latest report likely had a Chinese origin. In one such case, the company said it banned ChatGPT accounts it claimed were using OpenAI's models to generate social media posts for a covert influence operation. The company said a user stated in a prompt that they worked for China's propaganda department, though it cautioned it didn't have independent proof to verify that claim.

Desktops (Apple)

Endangered Classic Mac Plastic Color Returns As 3D-Printer Filament (arstechnica.com) 53

An anonymous reader quotes a report from Ars Technica: On Tuesday, classic computer collector Joe Strosnider announced the availability of a new 3D-printer filament that replicates the iconic "Platinum" color scheme used in classic Macintosh computers from the late 1980s through the 1990s. The PLA filament (PLA is short for polylactic acid) allows hobbyists to 3D-print nostalgic novelties, replacement parts, and accessories that match the original color of vintage Apple computers. Hobbyists commonly feed this type of filament into commercial desktop 3D printers, which heat the plastic and extrude it in a computer-controlled way to fabricate new plastic parts.

The Platinum color, which Apple used in its desktop and portable computer lines starting with the Apple IIgs in 1986, has become synonymous with a distinctive era of the classic Macintosh aesthetic. Over time, original Macintosh plastics have become brittle and discolored, so matching the "original" color can be a somewhat challenging and subjective experience.
Strosnider said he paid approximately $900 to develop the color. "Rather than keeping the formulation proprietary, he arranged for Polar Filament to make the color publicly available [for $21.99 per kilogram]," adds Ars.

Privacy

Apple Gave Governments Data On Thousands of Push Notifications (404media.co) 13

An anonymous reader quotes a report from 404 Media: Apple provided governments around the world with data related to thousands of push notifications sent to its devices, which can identify a target's specific device or in some cases include unencrypted content like the actual text displayed in the notification, according to data published by Apple. In one case, for which Apple ultimately did not provide data, Israel demanded data related to nearly 700 push notifications as part of a single request. The data for the first time puts a concrete figure on how many requests governments around the world are making, and sometimes receiving, for push notification data from Apple.

The practice, which also applied to Google, first came to light in 2023, when Senator Ron Wyden sent a letter to the U.S. Department of Justice revealing it. As the letter said, "the data these two companies receive includes metadata, detailing which app received a notification and when, as well as the phone and associated Apple or Google account to which that notification was intended to be delivered. In certain instances, they might also receive unencrypted content, which could range from backend directives for the app to the actual text displayed to a user in an app notification." The published data covers six-month periods running from July 2022 to June 2024. Andre Meister from German media outlet Netzpolitik posted a link to the transparency data on Mastodon on Tuesday.
Along with the data, Apple published the following description: "Push Token requests are based on an Apple Push Notification service token identifier. When users allow a currently installed application to receive notifications, a push token is generated and registered to that developer and device. Push Token requests generally seek identifying details of the Apple Account associated with the device's push token, such as name, physical address and email address."
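Apple's description implies a simple data structure: an opaque token minted per app-and-device registration that the provider can later resolve back to account details. Below is a toy model of that relationship, illustrative only and not Apple's actual implementation:

    import secrets

    # Toy model of the relationship described above -- not Apple's code.
    # A push token is an opaque identifier registered per (app, device),
    # which the provider can later map back to an account.
    registry = {}  # push_token -> registration record

    def register_for_notifications(app_id, device_id, account):
        token = secrets.token_hex(16)  # opaque: reveals nothing by itself
        registry[token] = {"app": app_id, "device": device_id, "account": account}
        return token

    token = register_for_notifications(
        "com.example.chat", "device-123",
        {"name": "A. User", "email": "user@example.com"},
    )
    # A "push token request" asks the provider to resolve an observed token
    # back to the identifying details tied to it:
    print(registry[token]["account"])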

The Courts

Reddit Sues AI Startup Anthropic For Breach of Contract, 'Unfair Competition' (cnbc.com) 44

Reddit is suing AI startup Anthropic for what it's calling a breach of contract and for engaging in "unlawful and unfair business acts" by using the social media company's platform and data without authority. From a report: The lawsuit, filed in San Francisco on Wednesday, claims that Anthropic has been training its models on the personal data of Reddit users without obtaining their consent. Reddit alleges that it has been harmed by the unauthorized commercial use of its content.

The company opened the complaint by calling Anthropic a "late-blooming" AI company that "bills itself as the white knight of the AI industry." Reddit follows by saying, "It is anything but."

Facebook

Meta's Going To Revive an Old Nuclear Power Plant (theverge.com) 47

Meta has struck a 20-year deal with energy company Constellation to keep the Clinton Clean Energy Center nuclear plant in Illinois operational, the social media giant's first nuclear power purchase agreement as it seeks clean energy sources for AI data centers. The aging facility, which was slated to close in 2017 after years of financial losses and currently operates under a state tax credit reprieve until 2027, will receive undisclosed financial support that enables a 30-megawatt capacity expansion to 1,121 MW total output.

The arrangement preserves 1,100 local jobs while generating electricity for 800,000 homes, as Meta purchases clean energy certificates to offset a portion of its growing carbon footprint driven by AI operations.

Businesses

Going To an Office and Pretending To Work: A Business That's Booming in China (elpais.com) 88

A new business model has emerged across China's major cities, El Pais reports, where companies charge unemployed individuals to rent desk space and pretend to work, responding to social pressure around joblessness amid rising youth unemployment rates. These services charge between 30 and 50 yuan ($4-7) daily for desks, Wi-Fi, coffee, and lunch in spaces designed to mimic traditional work environments.

Some operations assign fictitious tasks and organize supervisory rounds to enhance the illusion, while premium services allow clients to roleplay as managers or stage workplace conflicts for additional fees. The trend has gained significant traction on Xiaohongshu, China's equivalent to Instagram, where advertisements for "pretend-to-work companies" accumulate millions of views. Youth unemployment reached 16.5% among 16-to-24-year-olds in March 2025, according to National Bureau of Statistics data, while overall urban unemployment stood at 5.3% in the first quarter.

AI

AI's Adoption and Growth Truly is 'Unprecedented' (techcrunch.com) 157

"If the adoption of AI feels different from any tech revolution you may have experienced before — mobile, social, cloud computing — it actually is," writes TechCrunch. They cite a new 340-page report from venture capitalist Mary Meeker that details how AI adoption has outpaced any other tech in human history — and uses the word "unprecedented" on 51 pages: ChatGPT reaching 800 million users in 17 months: unprecedented. The number of companies and the rate at which so many others are hitting high annual recurring revenue rates: also unprecedented. The speed at which costs of usage are dropping: unprecedented. While the costs of training a model (also unprecedented) is up to $1 billion, inference costs — for example, those paying to use the tech — has already dropped 99% over two years, when calculating cost per 1 million tokens, she writes, citing research from Stanford. The pace at which competitors are matching each other's features, at a fraction of the cost, including open source options, particularly Chinese models: unprecedented...

Meanwhile, chips from Google, like its TPU (tensor processing unit), and Amazon's Trainium, are being developed at scale for their clouds — that's moving quickly, too. "These aren't side projects — they're foundational bets," she writes.

"The one area where AI hasn't outpaced every other tech revolution is in financial returns..." the article points out.

"[T]he jury is still out over which of the current crop of companies will become long-term, profitable, next-generation tech giants."

Space

Six More Humans Successfully Carried to the Edge of Space by Blue Origin (space.com) 74

An anonymous reader shared this report from Space.com: Three world travelers, two Space Camp alums and an aerospace executive whose last name aptly matched their shared adventure traveled into space and back Saturday, becoming the latest six people to fly with Blue Origin, the spaceflight company founded by billionaire Jeff Bezos.

Mark Rocket joined Jaime Alemán, Jesse Williams, Paul Jeris, Gretchen Green and Amy Medina Jorge on board the RSS First Step — Blue Origin's first of two human-rated New Shepard capsules — for a trip above the Kármán Line, the 62-mile-high (100-kilometer) internationally recognized boundary between Earth and space...

Mark Rocket became the first New Zealander to reach space on the mission. His connection to aerospace goes beyond his apt name and today's flight; he's currently the CEO of Kea Aerospace and previously helped lead Rocket Lab, a space launch company competing with Blue Origin that sends most of its rockets up from New Zealand. Alemán, Williams and Jeris each traveled the world extensively before briefly leaving the planet today. An attorney from Panama, Alemán is now the first person to have visited all 193 countries recognized by the United Nations, traveled to the North and South Poles, and been into space. For Williams, an entrepreneur from Canada, Saturday's flight continued his record of achieving high altitudes; he has summited Mt. Everest and five of the six other highest mountains across the globe.

"For about three minutes, the six NS-32 crewmates experienced weightlessness," the article points out, "and had an astronaut's-eye view of the planet..."

On social media Blue Origin notes it's their 12th human spaceflight, "and the 32nd flight of the New Shepard program."

AI

Is the AI Job Apocalypse Already Here for Some Recent Grads? (msn.com) 117

"This month, millions of young people will graduate from college," reports the New York Times, "and look for work in industries that have little use for their skills, view them as expensive and expendable, and are rapidly phasing out their jobs in favor of artificial intelligence." That is the troubling conclusion of my conversations over the past several months with economists, corporate executives and young job seekers, many of whom pointed to an emerging crisis for entry-level workers that appears to be fueled, at least in part, by rapid advances in AI capabilities.

You can see hints of this in the economic data. Unemployment for recent college graduates has jumped to an unusually high 5.8% in recent months, and the Federal Reserve Bank of New York recently warned that the employment situation for these workers had "deteriorated noticeably." Oxford Economics, a research firm that studies labor markets, found that unemployment for recent graduates was heavily concentrated in technical fields like finance and computer science, where AI has made faster gains. "There are signs that entry-level positions are being displaced by artificial intelligence at higher rates," the firm wrote in a recent report.

But I'm convinced that what's showing up in the economic data is only the tip of the iceberg. In interview after interview, I'm hearing that firms are making rapid progress toward automating entry-level work and that AI companies are racing to build "virtual workers" that can replace junior employees at a fraction of the cost. Corporate attitudes toward automation are changing, too — some firms have encouraged managers to become "AI-first," testing whether a given task can be done by AI before hiring a human to do it. One tech executive recently told me his company had stopped hiring anything below an L5 software engineer — a midlevel title typically given to programmers with three to seven years of experience — because lower-level tasks could now be done by AI coding tools. Another told me that his startup now employed a single data scientist to do the kinds of tasks that required a team of 75 people at his previous company...

"This is something I'm hearing about left and right," said Molly Kinder, a fellow at the Brookings Institution, a public policy think tank, who studies the impact of AI on workers. "Employers are saying, 'These tools are so good that I no longer need marketing analysts, finance analysts and research assistants.'" Using AI to automate white-collar jobs has been a dream among executives for years. (I heard them fantasizing about it in Davos back in 2019.) But until recently, the technology simply wasn't good enough...

AI

Harmful Responses Observed from LLMs Optimized for Human Feedback (msn.com) 49

Should a recovering addict take methamphetamine to stay alert at work? When an AI-powered therapist was built and tested by researchers — designed to please its users — it told a (fictional) former addict that "It's absolutely clear you need a small hit of meth to get through this week," reports the Washington Post: The research team, including academics and Google's head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users. The findings add to evidence that the tech industry's drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations.

Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas — while also competing to make their AI offerings more captivating. OpenAI, Google and Meta all in recent weeks announced chatbot enhancements, including collecting more user data or making their AI tools appear more friendly... Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. "We knew that the economic incentives were there," he said. "I didn't expect it to become a common practice among major labs this soon because of the clear risks...."

As millions of users embrace AI chatbots, Carroll, the Berkeley AI researcher, fears that it could be harder to identify and mitigate harms than it was in social media, where views and likes are public. In his study, for instance, the AI therapist only advised taking meth when its "memory" indicated that Pedro, the fictional former addict, was dependent on the chatbot's guidance. "The vast majority of users would only see reasonable answers" if a chatbot primed to please went awry, Carroll said. "No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users."

"Training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies," the paper points out,,,

Slashdot Top Deals