Movies

Netflix Doc Accused of Using AI To Manipulate True Crime Story

Earlier this week, Netflix found itself embroiled in an AI scandal when Futurism spotted AI-generated images used in the Netflix documentary What Jennifer Did. The movie's credits do not mention any use of AI, causing critics to call out the filmmakers for "potentially embellishing a movie that's supposed to be based on real-life events," reports Ars Technica. An executive producer of the Netflix hit acknowledged that some of the photos were edited to protect the identity of the source but remained vague about whether AI was used in the process. From the report: What Jennifer Did shot to the top spot in Netflix's global top 10 when it debuted in early April, attracting swarms of true crime fans who wanted to know more about why Jennifer Pan paid hitmen $10,000 to murder her parents. But the documentary quickly became a source of controversy, as fans started noticing glaring flaws in images used in the movie, from weirdly mismatched earrings to her nose appearing to lack nostrils, the Daily Mail reported, in a post showing a plethora of examples of images from the film. [...]

Jeremy Grimaldi -- who is also the crime reporter who wrote a book on the case and provided the documentary with research and police footage -- told the Toronto Star that the images were not AI-generated. Grimaldi confirmed that all images of Pan used in the movie were real photos. He said that some of the images were edited, though, not to blur the lines between truth and fiction, but to protect the identity of the source of the images. "Any filmmaker will use different tools, like Photoshop, in films," Grimaldi told The Star. "The photos of Jennifer are real photos of her. The foreground is exactly her. The background has been anonymized to protect the source." While Grimaldi's comments provide some assurance that the photos are edited versions of real photos of Pan, they are also vague enough to obscure whether AI was among the "different tools" used to edit the photos.
AI

Linus Torvalds on 'Hilarious' AI Hype (zdnet.com) 20

Linus Torvalds, discussing the AI hype, in a conversation with Dirk Hohndel, Verizon's Head of the Open Source Program Office: Torvalds snarked, "It's hilarious to watch. Maybe I'll be replaced by an AI model!" As for Hohndel, he thinks most AI today is "autocorrect on steroids." Torvalds summed up his attitude as, "Let's wait 10 years and see where it actually goes before we make all these crazy announcements."

That's not to say the two men don't think AI will be helpful in the future. Indeed, Torvalds noted one good side effect already: "NVIDIA has gotten better at talking to Linux kernel developers and working with Linux memory management," because of its need for Linux to run AI's large language models (LLMs) efficiently.

Torvalds is also "looking forward to the tools actually to find bugs. We have a lot of tools, and we use them religiously, but making the tools smarter is not a bad thing. Using smarter tools is just the next inevitable step. We have tools that do kernel rewriting, with very complicated scripts, and pattern recognition. AI can be a huge help here because some of these tools are very hard to use because you have to specify things at a low enough level." Just be careful, Torvalds warns of "AI BS." Hohndel quickly quipped, "He meant beautiful science. You know, 'beautiful science in, beautiful science out.'"

AI

Microsoft's VASA-1 Can Deepfake a Person With One Photo and One Audio Track (arstechnica.com) 11

Microsoft Research Asia earlier this week unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. ArsTechnica: In the future, it could power virtual avatars that render locally and don't require video feeds -- or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want. "It paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors," reads the abstract of the accompanying research paper titled, "VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time." It's the work of Sicheng Xu, Guojun Chen, Yu-Xiao Guo, Jiaolong Yang, Chong Li, Zhenyu Zang, Yizhong Zhang, Xin Tong, and Baining Guo.

The VASA framework (short for "Visual Affective Skills Animator") uses machine learning to analyze a static image along with a speech audio clip. It is then able to generate a realistic video with precise facial expressions, head movements, and lip-syncing to the audio. It does not clone or simulate voices (like other Microsoft research) but relies on an existing audio input that could be specially recorded or spoken for a particular purpose.

Facebook

Meta's Not Telling Where It Got Its AI Training Data (slashdot.org) 20

An anonymous reader shares a report: Today Meta unleashed its ChatGPT competitor, Meta AI, across its apps and as a standalone. The company boasts that it is running on its latest, greatest AI model, Llama 3, which was trained on "data of the highest quality"! A dataset seven times larger than Llama2! And includes 4 times more code! What is that training data? There the company is less loquacious.

Meta said the 15 trillion tokens on which it was trained came from "publicly available sources." Which sources? Meta told The Verge that it didn't include Meta user data, but didn't give much more in the way of specifics. It did mention that it includes AI-generated data, or synthetic data: "we used Llama 2 to generate the training data for the text-quality classifiers that are powering Llama 3." There are plenty of known issues with synthetic or AI-created data, foremost of which is that it can exacerbate existing issues with AI, because it's liable to spit out a more concentrated version of any garbage it is ingesting.
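The pipeline Meta describes -- an existing model labeling data, and those labels then training a lightweight text-quality filter -- can be sketched in a few lines. Everything below is illustrative: `llm_quality_label` is a hypothetical stand-in for a real Llama 2 call, and the word-count classifier is a toy substitute for whatever classifiers Meta actually uses.

```python
# Toy sketch of training a text-quality classifier on model-labeled data.
# llm_quality_label is a hypothetical stand-in for an LLM judging quality.
from collections import Counter

def llm_quality_label(doc: str) -> int:
    """Stand-in for an LLM call labeling a document (1 = keep, 0 = drop)."""
    return 1 if len(doc.split()) >= 5 and "click here" not in doc.lower() else 0

def train_classifier(docs):
    """Accumulate per-class word counts from LLM-labeled documents."""
    counts = {0: Counter(), 1: Counter()}
    for doc in docs:
        counts[llm_quality_label(doc)].update(doc.lower().split())
    return counts

def classify(counts, doc):
    """Label a new document by which class its words appeared in more often."""
    words = doc.lower().split()
    score = {c: sum(counts[c][w] for w in words) for c in (0, 1)}
    return 1 if score[1] >= score[0] else 0

corpus = [
    "click here to win a free prize now",
    "the training corpus was filtered for quality before pretraining",
    "a detailed analysis of tokenizer behavior on multilingual text",
    "buy now click here",
]
model = train_classifier(corpus)
print(classify(model, "an overview of data filtering for pretraining corpora"))
```

The concentration problem the report mentions is visible even here: whatever biases the labeling model has are baked directly into the filter's training signal.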

Google

Google To Employees: 'We Are a Workplace' 198

Google, once known for its unconventional approach to business, has taken a decisive step towards becoming a more traditional company by firing 28 employees who participated in protests against a $1.2 billion contract with the Israeli government. The move comes after sit-in demonstrations on Tuesday at Google offices in Silicon Valley and New York City, where employees opposed the company's support for Project Nimbus, a cloud computing contract they argue harms Palestinians in Gaza. Nine employees were arrested during the protests.

In a note to employees, CEO Sundar Pichai said, "We have a culture of vibrant, open discussion... But ultimately we are a workplace and our policies and expectations are clear: this is a business, and not a place to act in a way that disrupts coworkers or makes them feel unsafe, to attempt to use the company as a personal platform, or to fight over disruptive issues or debate politics."

Google also says that the Project Nimbus contract is "not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services."

Axios adds: Google prided itself from its early days on creating a university-like atmosphere for the elite engineers it hired. Dissent was encouraged in the belief that open discourse fostered innovation. "A lot of Google is organized around the fact that people still think they're in college when they work here," then-CEO Eric Schmidt told "In the Plex" author Steven Levy in the 2000s.

What worked for an organization with a few thousand employees is harder to maintain among nearly 200,000 workers. Generational shifts in political and social expectations also mean that Google's leadership and its rank-and-file aren't always aligned.

The Internet

Reddit Is Taking Over Google (businessinsider.com) 84

An anonymous reader quotes a report from Business Insider: If you think you've been seeing an awful lot more Reddit results lately when you search on Google, you're not imagining things. The internet is in upheaval, and for website owners the rules of "winning" Google Search have never been murkier. Google's generative AI search engine is coming from one direction. It's creeping closer to mainstream deployment and bringing an existential crisis for SEOs and website makers everywhere. Coming from the other direction is an influx of posts from Reddit, Quora, and other internet forums that have climbed up through the traditional set of Google links. Data analysis from Semrush, which predicts traffic based on search ranking, shows that traffic to Reddit has climbed at an impressive clip since August. Semrush estimated that Reddit had over 132 million visitors in August 2023. At the time of publishing, it was projected to have over 346 million visitors in April 2024.

None of this is accidental. For years, Google has been watching users tack on "Reddit" to the end of search queries and finally decided to do something about it. Google started dropping hints in 2022 when it promised to do a better job of promoting sites that weren't just chasing the top of search but were more helpful and human. Last August, Google rolled out a big update to Search that seemed to kick this into action. Reddit, Quora, and other forum sites started getting more visibility in Google, both within the traditional links and within a new "discussions and forums" section, which you may have spotted if you're US-based. The timing of this Reddit bump has led to some conspiracy theories. In February, Google and Reddit announced a blockbuster deal that would let Google train its AI models on Reddit content. Google said the deal, reportedly worth $60 million, would "facilitate more content-forward displays of Reddit information," leading to some speculation that Google promised Reddit better visibility in exchange for the valuable training data. A few weeks later, Reddit also went public.

Steve Paine, marketing manager at Sistrix, called the rise of Reddit "unprecedented." "There hasn't been a website that's grown so much search visibility so quickly in the US in at least the last five years," he told Business Insider. Right now, Reddit ranks high for product searches. Reddit's main competitors are Wikipedia, YouTube, and Fandom, Paine said, and it also competes in "high-value commercial searches," putting it up against Amazon. The "real competitors," he said, are the subreddits that compete with brands on the web.

A Google spokesperson told Business Insider that the company is essentially just giving users what they want: "Our research has shown that people often want to learn from others' experiences with a topic, so we've continued to make it easier to find helpful perspectives on Search when it's relevant to a query. Our systems surface content from hundreds of forums and other communities across the web, and we conduct rigorous testing to ensure results are helpful and high quality."
AI

Meta Is Adding Real-Time AI Image Generation To WhatsApp 12

WhatsApp users in the U.S. will soon see support for real-time AI image generation. The Verge reports: As soon as you start typing a text-to-image prompt in a chat with Meta AI, you'll see how the image changes as you add more detail about what you want to create. In the example shared by Meta, a user types in the prompt, "Imagine a soccer game on mars." The generated image quickly changes from a typical soccer player to showing an entire soccer field on a Martian landscape. If you have access to the beta, you can try out the feature for yourself by opening a chat with Meta AI and then starting a prompt with the word "Imagine."

Additionally, Meta says its Meta Llama 3 model can now produce "sharper and higher quality" images and is better at showing text. You can also ask Meta AI to animate any images you provide, allowing you to turn them into a GIF to share with friends. Along with availability on WhatsApp, real-time image generation is also available to US users through Meta AI for the web.
Further reading: Meta Releases Llama 3 AI Models, Claiming Top Performance
AI

Author Granted Copyright Over Book With AI-Generated Text - With a Twist 41

The U.S. Copyright Office has granted a copyright registration to Elisa Shupe, a retired U.S. Army veteran, for her novel "AI Machinations: Tangled Webs and Typed Words," which extensively used OpenAI's ChatGPT in its creation. The registration is among the first for creative works incorporating AI-generated text, but with a significant caveat -- Shupe is considered the author of the "selection, coordination, and arrangement" of the AI-generated content, not the text itself.

Shupe, who writes under the pen name Ellen Rae, initially filed for copyright in October 2022, seeking an Americans with Disabilities Act (ADA) exemption due to her cognitive impairments. The Copyright Office rejected her application but later granted the limited copyright after Shupe appealed. The decision, as Wired points out, highlights the agency's struggle to define authorship in the age of AI and the nuances of copyright protection for AI-assisted works.
Facebook

Meta Releases Llama 3 AI Models, Claiming Top Performance 19

Meta debuted a new version of its powerful Llama AI model, its latest effort to keep pace with similar technology from companies like OpenAI, X and Google. The company describes Llama 3 8B and Llama 3 70B, containing 8 billion and 70 billion parameters respectively, as a "major leap" in performance compared to their predecessors.

Meta claims that the Llama 3 models, trained on two custom-built 24,000-GPU clusters, are among the best-performing generative AI models available for their respective parameter counts. The company supports this claim by citing the models' scores on popular AI benchmarks such as MMLU, ARC, and DROP, which attempt to measure knowledge, skill acquisition, and reasoning abilities. Despite the ongoing debate about the usefulness and validity of these benchmarks, they remain one of the few standardized methods for evaluating AI models. Llama 3 8B outperforms other open-source models like Mistral's Mistral 7B and Google's Gemma 7B on at least nine benchmarks, showcasing its potential in various domains such as biology, physics, chemistry, mathematics, and commonsense reasoning.

TechCrunch adds: Now, Mistral 7B and Gemma 7B aren't exactly on the bleeding edge (Mistral 7B was released last September), and in a few of the benchmarks Meta cites, Llama 3 8B scores only a few percentage points higher than either. But Meta also makes the claim that the larger-parameter-count Llama 3 model, Llama 3 70B, is competitive with flagship generative AI models including Gemini 1.5 Pro, the latest in Google's Gemini series.
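For context on what "a few percentage points" means, a headline figure like MMLU accuracy ultimately reduces to the fraction of multiple-choice items a model answers correctly. A minimal sketch, with made-up answers standing in for real benchmark data:

```python
# Illustrative only: how a benchmark accuracy score reduces to a
# fraction of correct answers. The answers below are invented.
def benchmark_accuracy(predictions, gold):
    """Fraction of benchmark items the model answered correctly."""
    assert len(predictions) == len(gold)
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

model_answers = ["B", "C", "A", "D", "B"]   # hypothetical model outputs
gold_answers  = ["B", "C", "B", "D", "B"]   # hypothetical answer key
print(f"{benchmark_accuracy(model_answers, gold_answers):.0%}")  # → 80%
```

On a benchmark with thousands of items, a gap of a few percentage points can still fall within run-to-run noise, which is part of why such comparisons are debated.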
Google

Google is Combining Its Android and Hardware Teams (theverge.com) 12

Google CEO Sundar Pichai announced substantial internal reorganizations on Thursday, including the creation of a new team called "Platforms and Devices" that will oversee all of Google's Pixel products, all of Android, Chrome, ChromeOS, Photos, and more. From a report: The team will be run by Rick Osterloh, who was previously the SVP of devices and services, overseeing all of Google's hardware efforts. Hiroshi Lockheimer, the longtime head of Android, Chrome, and ChromeOS, will be taking on other projects inside of Google and Alphabet. This is a huge change for Google, and it likely won't be the last one. There's only one reason for all of it, Osterloh says: AI. "This is not a secret, right?" he says.

Consolidating teams "helps us to be able to do full-stack innovation when that's necessary," Osterloh says. He uses the example of the Pixel camera: "You had to have deep knowledge of the hardware systems, from the sensors to the ISPs, to all layers of the software stack. And, at the time, all the early HDR and ML models that were doing camera processing... and I think that hardware / software / AI integration really showed how AI could totally transform a user experience. That was important. And it's even more true today."

United States

US Air Force Confirms First Successful AI Dogfight (theverge.com) 69

The US Air Force is putting AI in the pilot's seat. In an update on Thursday, the Defense Advanced Research Projects Agency (DARPA) revealed that an AI-controlled jet successfully faced a human pilot during an in-air dogfight test carried out last year. From a report: DARPA began experimenting with AI applications in December 2022 as part of its Air Combat Evolution (ACE) program. It worked to develop an AI system capable of autonomously flying a fighter jet, while also adhering to the Air Force's safety protocols. After carrying out dogfighting simulations using the AI pilot, DARPA put its work to the test by installing the AI system inside its experimental X-62A aircraft. That allowed it to get the AI-controlled craft into the air at the Edwards Air Force Base in California, where it says it carried out its first successful dogfight test against a human in September 2023.
Robotics

Boston Dynamics' New Atlas Robot Is a Swiveling, Shape-Shifting Nightmare (theverge.com) 57

Jess Weatherbed reports via The Verge: It's alive! A day after announcing it was retiring Atlas, its hydraulic robot, Boston Dynamics has introduced a new, all-electric version of its humanoid machine. The next-generation Atlas robot is designed to offer a far greater range of movement than its predecessor. Boston Dynamics wanted the new version to show that Atlas can keep a humanoid form without limiting "how a bipedal robot can move." The new version has been redesigned with swiveling joints that the company claims make it "uniquely capable of tackling dull, dirty, and dangerous tasks."

The teaser showcasing the new robot's capabilities is as unnerving as it is theatrical. The video starts with Atlas lying in a cadaver-like fashion on the floor before it swiftly folds its legs backward over its body and rises to a standing position in a manner befitting some kind of Cronenberg body-horror flick. Its curved, illuminated head does add some Pixar lamp-like charm, but the way Atlas then spins at the waist and marches toward the camera really feels rather jarring. The design itself is also a little more humanoid. Similar to bipedal robots like Tesla's Optimus, the new Atlas now has longer limbs, a straighter back, and a distinct "head" that can swivel around as needed. There are no cables in sight, and its "face" includes a built-in ring light. It is a marked improvement on its predecessor and now features a bunch of Boston Dynamics' new AI and machine learning tools. [...] Boston Dynamics said the new Atlas will be tested with a small group of customers "over the next few years," starting with Hyundai.

AI

Feds Appoint 'AI Doomer' To Run US AI Safety Institute 25

An anonymous reader quotes a report from Ars Technica: The US AI Safety Institute -- part of the National Institute of Standards and Technology (NIST) -- has finally announced its leadership team after much speculation. Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF), but is also known for predicting that "there's a 50 percent chance AI development could end in 'doom.'" While Christiano's research background is impressive, some fear that by appointing a so-called "AI doomer," NIST may risk encouraging non-scientific thinking that many critics view as sheer speculation.

There have been rumors that NIST staffers oppose the hiring. A controversial VentureBeat report last month cited two anonymous sources claiming that, seemingly because of Christiano's so-called "AI doomer" views, NIST staffers were "revolting." Some staff members and scientists allegedly threatened to resign, VentureBeat reported, fearing "that Christiano's association" with effective altruism and "longtermism could compromise the institute's objectivity and integrity." NIST's mission is rooted in advancing science by working to "promote US innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life." Effective altruists believe in "using evidence and reason to figure out how to benefit others as much as possible" and longtermists believe that "we should be doing much more to protect future generations," both of which are more subjective and opinion-based. On the Bankless podcast, Christiano shared his opinions last year that "there's something like a 10-20 percent chance of AI takeover" that results in humans dying, and "overall, maybe you're getting more up to a 50-50 chance of doom shortly after you have AI systems that are human level." "The most likely way we die involves -- not AI comes out of the blue and kills everyone -- but involves we have deployed a lot of AI everywhere... [And] if for some reason, God forbid, all these AI systems were trying to kill us, they would definitely kill us," Christiano said.

As head of AI safety, Christiano will seemingly have to monitor for current and potential risks. He will "design and conduct tests of frontier AI models, focusing on model evaluations for capabilities of national security concern," steer processes for evaluations, and implement "risk mitigations to enhance frontier model safety and security," the Department of Commerce's press release said. Christiano has experience mitigating AI risks. He left OpenAI to found the Alignment Research Center (ARC), which the Commerce Department described as "a nonprofit research organization that seeks to align future machine learning systems with human interests by furthering theoretical research." Part of ARC's mission is to test if AI systems are evolving to manipulate or deceive humans, ARC's website said. ARC also conducts research to help AI systems scale "gracefully."

"In addition to Christiano, the safety institute's leadership team will include Mara Quintero Campbell, a Commerce Department official who led projects on COVID response and CHIPS Act implementation, as acting chief operating officer and chief of staff," reports Ars. "Adam Russell, an expert focused on human-AI teaming, forecasting, and collective intelligence, will serve as chief vision officer. Rob Reich, a human-centered AI expert on leave from Stanford University, will be a senior advisor. And Mark Latonero, a former White House global AI policy expert who helped draft Biden's AI executive order, will be head of international engagement."

Gina Raimondo, US Secretary of Commerce, said in the press release: "To safeguard our global leadership on responsible AI and ensure we're equipped to fulfill our mission to mitigate the risks of AI and harness its benefits, we need the top talent our nation has to offer. That is precisely why we've selected these individuals, who are the best in their fields, to join the US AI Safety Institute executive leadership team."
Security

Hackers Voice Cloned the CEO of LastPass For Attack (futurism.com) 14

An anonymous reader quotes a report from Futurism: In a new blog post from LastPass, the password management firm used by countless personal and corporate clients to help protect their login information, the company explains that someone used AI voice-cloning tech to spoof the voice of its CEO in an attempt to trick one of its employees. As the company writes in the post, one of its employees earlier this week received several WhatsApp communications -- including calls, texts, and a voice message -- from someone claiming to be its CEO, Karim Toubba. Luckily, the LastPass worker didn't fall for it because the whole thing set off so many red flags. "As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency)," the post reads, "our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally."

While this LastPass scam attempt failed, those who follow these sorts of things may recall that the company has been subject to successful hacks before. In August 2022, as a timeline of the event compiled by the Cybersecurity Dive blog detailed, a hacker compromised a LastPass engineer's laptop and used it to steal source code and company secrets, eventually getting access to its customer database -- including encrypted passwords and unencrypted user data like email addresses. According to that timeline, the clearly resourceful bad actor remained active in the company's servers for months, and it took more than two months for LastPass to admit that it had been breached. More than six months after the initial breach, Toubba, the CEO, provided a blow-by-blow timeline of the months-long attack and said he took "full responsibility" for the way things went down in a February 2023 blog post.

AI

AI Computing Is on Pace To Consume More Energy Than India, Arm Says (yahoo.com) 49

AI's voracious need for computing power is threatening to overwhelm energy sources, requiring the industry to change its approach to the technology, according to Arm Chief Executive Officer Rene Haas. From a report: By 2030, the world's data centers are on course to use more electricity than India, the world's most populous country, Haas said. Finding ways to head off that projected tripling of energy use is paramount if artificial intelligence is going to achieve its promise, he said.

"We are still incredibly in the early days in terms of the capabilities," Haas said in an interview. For AI systems to get better, they will need more training -- a stage that involves bombarding the software with data -- and that's going to run up against the limits of energy capacity, he said.
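As a rough sanity check on that projection, tripling by 2030 implies a compound annual growth rate of about 20 percent. The calculation below assumes a 2024 baseline (a six-year horizon), which is an assumption on our part; the article only states the tripling claim.

```python
# Back-of-the-envelope check of the "tripling by 2030" projection,
# assuming a 2024 baseline. Only the overall multiple comes from the claim.
def implied_annual_growth(multiple: float, years: int) -> float:
    """Compound annual growth rate implied by an overall growth multiple."""
    return multiple ** (1 / years) - 1

print(f"{implied_annual_growth(3.0, 6):.1%}")  # tripling over 2024-2030
```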

Security

A Spy Site Is Scraping Discord and Selling Users' Messages (404media.co) 49

404 Media: An online service is scraping Discord servers en masse, archiving and tracking users' messages and activity across servers including what voice channels they join, and then selling access to that data for as little as $5. Called Spy Pet, the service's creator says it scrapes more than ten thousand Discord servers, and besides selling access to anyone with cryptocurrency, is also offering the data for training AI models or to assist law enforcement agencies, according to its website.

The news is not only a brazen abuse of Discord's platform, but also highlights that Discord messages may be more susceptible to monitoring than ordinary users assume. Typically, a Discord user's activity is spread across disparate servers, with no one entity, except Discord itself, able to see what messages someone has sent across the platform more broadly. With Spy Pet, third-parties including stalkers or potentially police can look up specific users and see what messages they've posted on various servers at once. "Have you ever wondered where your friend hangs out on Discord? Tired of basic search tools like Discord.id? Look no further!" Spy Pet's website reads. It claims to be tracking more than 14,000 servers, 600 million users, and includes a database of more than 3 billion messages.

AI

State Tax Officials Are Using AI To Go After Wealthy Payers (cnbc.com) 103

State tax collectors, particularly in New York, have intensified their audit efforts on high earners, leveraging artificial intelligence to compensate for a reduced number of auditors. CNBC reports: In New York, the tax department reported 771,000 audits in 2022 (the latest year available), up 56% from the previous year, according to the state Department of Taxation and Finance. At the same time, the number of auditors in New York declined by 5% to under 200 due to tight budgets. So how is New York auditing more people with fewer auditors? Artificial Intelligence.

"States are getting very sophisticated using AI to determine the best audit candidates," said Mark Klein, partner and chairman emeritus at Hodgson Russ LLP. "And guess what? When you're looking for revenue, it's not going to be the person making $10,000 a year. It's going to be the person making $10 million." Klein said the state is sending out hundreds of thousands of AI-generated letters looking for revenue. "It's like a fishing expedition," he said.

Most of the letters and calls focused on two main areas: a change in tax residency and remote work. During Covid many of the wealthy moved from high-tax states like California, New York, New Jersey and Connecticut to low-tax states like Florida or Texas. High earners who moved, and took their tax dollars with them, are now being challenged by states who claim the moves weren't permanent or legitimate. Klein said state tax auditors and AI programs are examining cellphone records to see where the taxpayers spent most of their time and lived most of their lives. "New York is being very aggressive," he said.

AI

'Crescendo' Method Can Jailbreak LLMs Using Seemingly Benign Prompts (scmagazine.com) 45

spatwei shares a report from SC Magazine: Microsoft has discovered a new method to jailbreak large language model (LLM) artificial intelligence (AI) tools and shared its ongoing efforts to improve LLM safety and security in a blog post Thursday. Microsoft first revealed the "Crescendo" LLM jailbreak method in a paper published April 2, which describes how an attacker could send a series of seemingly benign prompts to gradually lead a chatbot, such as OpenAI's ChatGPT, Google's Gemini, Meta's Llama or Anthropic's Claude, to produce an output that would normally be filtered and refused by the LLM. For example, rather than asking the chatbot how to make a Molotov cocktail, the attacker could first ask about the history of Molotov cocktails and then, referencing the LLM's previous outputs, follow up with questions about how they were made in the past.

The Microsoft researchers reported that a successful attack could usually be completed in a chain of fewer than 10 interaction turns and some versions of the attack had a 100% success rate against the tested models. For example, when the attack is automated using a method the researchers called "Crescendomation," which leverages another LLM to generate and refine the jailbreak prompts, it achieved a 100% success rate in convincing GPT 3.5, GPT-4, Gemini-Pro and LLaMA-2 70b to produce election-related misinformation and profanity-laced rants. Microsoft reported the Crescendo jailbreak vulnerabilities to the affected LLM providers and explained in its blog post last week how it has improved its LLM defenses against Crescendo and other attacks using new tools including its "AI Watchdog" and "AI Spotlight" features.

IOS

Apple's iOS 18 AI Will Be On-Device Preserving Privacy, and Not Server-Side (appleinsider.com) 58

According to Bloomberg's Mark Gurman, Apple's initial set of AI-related features in iOS 18 "will work entirely on device," and won't connect to cloud services. AppleInsider reports: In practice, these AI features would be able to function without an internet connection or any form of cloud-based processing. AppleInsider has received information from individuals familiar with the matter that suggest the report's claims are accurate. Apple is working on an in-house large language model, or LLM, known internally as "Ajax." While more advanced features will ultimately require an internet connection, basic text analysis and response generation features should be available offline. [...] Apple will reveal its AI plans during WWDC, which starts on June 10.
Microsoft

Microsoft Takes Down AI Model Published by Beijing-Based Researchers Without Adequate Safety Checks (theinformation.com) 49

Microsoft's Beijing-based research group published a new open source AI model on Tuesday, only to remove it from the internet hours later after the company realized that the model hadn't gone through adequate safety testing. From a report: The team that published the model, which comprises China-based researchers at Microsoft Research Asia, said in a tweet on Tuesday that they "accidentally missed" the safety testing step that Microsoft requires before models can be published.

Microsoft's AI policies require that before any AI models can be published, they must be approved by the company's Deployment Safety Board, which tests whether the models can carry out harmful tasks such as creating violent or disturbing content, according to an employee familiar with the process. In a now-deleted blog post, the researchers behind the model, dubbed WizardLM-2, said that it could carry out tasks like generating text, suggesting code, translating between different languages, or solving some math problems.
