×
Government

Are AI-Generated Search Results Still Protected by Section 230? (msn.com) 27

Starting this week millions will see AI-generated answers in Google's search results by default. But the announcement Tuesday at Google's annual developer conference suggests a future that's "not without its risks, both to users and to Google itself," argues the Washington Post: For years, Google has been shielded for liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won't apply when its AI answers search questions directly. "As we all know, generative AIs hallucinate," said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. "So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information," rather than just the distributor of it...

Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn't extend Section 230 to cover AI tools. "As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors," he predicted. "It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate." But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be "a really good outcome."

Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has "outlived its usefulness."

The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would "decimate small tech" and "discourage free speech online."

The digital law professor points out Google has traditionally escaped legal liability by attributing its answers to specific sources — but it's not just Google that has to worry about the issue. The article notes that Microsoft's Bing search engine also supplies AI-generated answers (from Microsoft's Copilot). "And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot."

The article also note sthat several U.S. Congressional committees are considering "a bevy" of AI bills...
AI

Cruise Reached an $8M+ Settlement With the Person Dragged Under Its Robotaxi (ocregister.com) 36

Bloomberg reports that self-driving car company Cruise "reached an $8 million to $12 million settlement with a pedestrian who was dragged by one of its self-driving vehicles in San Francisco, according to a person familiar with the situation." The settlement was struck earlier this year and the woman is out of the hospital, said the person, who declined to be identified discussing a private matter. In the October incident, the pedestrian crossing the road was struck by another vehicle before landing in front of one of GM's Cruise vehicles. The robotaxi braked hard but ran over the person. It then pulled over for safety, driving 20 feet at a speed of up to seven miles per hour with the pedestrian still under the car.
The incident "contributed to the company being blocked from operating in San Francisco and halting its operations around the country for months," reports the Washington Post: The company initially told reporters that the car had stopped just after rolling over the pedestrian, but the California Public Utilities Commission, which regulates permits for self-driving cars, later said Cruise had covered up the truth that its car actually kept going and dragged the woman. The crash and the questions about what Cruise knew and disclosed to investigators led to a firestorm of scrutiny on the company. Cruise pulled its vehicles off roads countrywide, laid off a quarter of its staff and in November its CEO Kyle Vogt stepped down. The Department of Justice and the Securities and Exchange Commission are investigating the company, adding to a probe from the National Highway Traffic Safety Administration.

In Cruise's absence, Google's Waymo self-driving cars have become the only robotaxis operating in San Francisco.

in June, the company's president and chief technology officer Mohamed Elshenawy is slated to speak at a conference on artificial-intelligence quality in San Francisco.

Dow Jones news services published this quote from a Cruise spokesperson. "The hearts of all Cruise employees continue to be with the pedestrian, and we hope for her continued recovery."
AI

Bruce Schneier Reminds LLM Engineers About the Risks of Prompt Injection Vulnerabilities (schneier.com) 30

Security professional Bruce Schneier argues that large language models have the same vulnerability as phones in the 1970s exploited by John Draper.

"Data and control used the same channel," Schneier writes in Communications of the ACM. "That is, the commands that told the phone switch what to do were sent along the same path as voices." Other forms of prompt injection involve the LLM receiving malicious instructions in its training data. Another example hides secret commands in Web pages. Any LLM application that processes emails or Web pages is vulnerable. Attackers can embed malicious commands in images and videos, so any system that processes those is vulnerable. Any LLM application that interacts with untrusted users — think of a chatbot embedded in a website — will be vulnerable to attack. It's hard to think of an LLM application that isn't vulnerable in some way.

Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class. The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data — whether it be training data, text prompts, or other input into the LLM — is mixed up with the commands that tell the LLM what to do, the system will be vulnerable. But unlike the phone system, we can't separate an LLM's data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it's the very thing that enables prompt injection.

Like the old phone system, defenses are likely to be piecemeal. We're getting better at creating LLMs that are resistant to these attacks. We're building systems that clean up inputs, both by recognizing known prompt-injection attacks and training other LLMs to try to recognize what those attacks look like. (Although now you have to secure that other LLM from prompt-injection attacks.) In some cases, we can use access-control mechanisms and other Internet security systems to limit who can access the LLM and what the LLM can do. This will limit how much we can trust them. Can you ever trust an LLM email assistant if it can be tricked into doing something it shouldn't do? Can you ever trust a generative-AI traffic-detection video system if someone can hold up a carefully worded sign and convince it to not notice a particular license plate — and then forget that it ever saw the sign...?

Someday, some AI researcher will figure out how to separate the data and control paths. Until then, though, we're going to have to think carefully about using LLMs in potentially adversarial situations...like, say, on the Internet.

Schneier urges engineers to balance the risks of generative AI with the powers it brings. "Using them for everything is easier than taking the time to figure out what sort of specialized AI is optimized for the task.

"But generative AI comes with a lot of security baggage — in the form of prompt-injection attacks and other security risks. We need to take a more nuanced view of AI systems, their uses, their own particular risks, and their costs vs. benefits."
AI

'Openwashing' 26

An anonymous reader quotes a report from The New York Times: There's a big debate in the tech world over whether artificial intelligence models should be "open source." Elon Musk, who helped found OpenAI in 2015, sued the startup and its chief executive, Sam Altman, on claims that the company had diverged from its mission of openness. The Biden administration is investigating the risks and benefits of open source models. Proponents of open source A.I. models say they're more equitable and safer for society, while detractors say they are more likely to be abused for malicious intent. One big hiccup in the debate? There's no agreed-upon definition of what open source A.I. actually means. And some are accusing A.I. companies of "openwashing" -- using the "open source" term disingenuously to make themselves look good. (Accusations of openwashing have previously been aimed at coding projects that used the open source label too loosely.)

In a blog post on Open Future, a European think tank supporting open sourcing, Alek Tarkowski wrote, "As the rules get written, one challenge is building sufficient guardrails against corporations' attempts at 'openwashing.'" Last month the Linux Foundation, a nonprofit that supports open-source software projects, cautioned that "this 'openwashing' trend threatens to undermine the very premise of openness -- the free sharing of knowledge to enable inspection, replication and collective advancement." Organizations that apply the label to their models may be taking very different approaches to openness. [...]

The main reason is that while open source software allows anyone to replicate or modify it, building an A.I. model requires much more than code. Only a handful of companies can fund the computing power and data curation required. That's why some experts say labeling any A.I. as "open source" is at best misleading and at worst a marketing tool. "Even maximally open A.I. systems do not allow open access to the resources necessary to 'democratize' access to A.I., or enable full scrutiny," said David Gray Widder, a postdoctoral fellow at Cornell Tech who has studied use of the "open source" label by A.I. companies.
The Military

Palantir's First-Ever AI Warfare Conference (theguardian.com) 35

An anonymous reader quotes a report from The Guardian, written by Caroline Haskins: On May 7th and 8th in Washington, D.C., the city's biggest convention hall welcomed America's military-industrial complex, its top technology companies and its most outspoken justifiers of war crimes. Of course, that's not how they would describe it. It was the inaugural "AI Expo for National Competitiveness," hosted by the Special Competitive Studies Project -- better known as the "techno-economic" thinktank created by the former Google CEO and current billionaire Eric Schmidt. The conference's lead sponsor was Palantir, a software company co-founded by Peter Thiel that's best known for inspiring 2019 protests against its work with Immigration and Customs Enforcement (Ice) at the height of Trump's family separation policy. Currently, Palantir is supplying some of its AI products to the Israel Defense Forces.

The conference hall was also filled with booths representing the U.S. military and dozens of its contractors, ranging from Booz Allen Hamilton to a random company that was described to me as Uber for airplane software. At industry conferences like these, powerful people tend to be more unfiltered – they assume they're in a safe space, among friends and peers. I was curious, what would they say about the AI-powered violence in Gaza, or what they think is the future of war?

Attendees were told the conference highlight would be a series of panels in a large room toward the back of the hall. In reality, that room hosted just one of note. Featuring Schmidt and the Palantir CEO, Alex Karp, the fire-breathing panel would set the tone for the rest of the conference. More specifically, it divided attendees into two groups: those who see war as a matter of money and strategy, and those who see it as a matter of death. The vast majority of people there fell into group one. I've written about relationships between tech companies and the military before, so I shouldn't have been surprised by anything I saw or heard at this conference. But when it ended, and I departed DC for home, it felt like my life force had been completely sucked out of my body.
Some of the noteworthy quotes from the panel and convention, as highlighted in Haskins' reporting, include:

"It's always great when the CIA helps you out," Schmidt joked when CIA deputy director David Cohen lent him his microphone when his didn't work.

The U.S. has to "scare our adversaries to death" in war, said Karp. On university graduates protesting Israel's war in Gaza, Karp described their views as a "pagan religion infecting our universities" and "an infection inside of our society."

"The peace activists are war activists," Karp insisted. "We are the peace activists."

A huge aspect of war in a democracy, Karp went on to argue, is leaders successfully selling that war domestically. "If we lose the intellectual debate, you will not be able to deploy any armies in the west ever," Karp said.

A man in nuclear weapons research jokingly referred to himself as "the new Oppenheimer."
Businesses

OpenAI Strikes Reddit Deal To Train Its AI On Your Posts (theverge.com) 42

Emilia David reports via The Verge: OpenAI has signed a deal for access to real-time content from Reddit's data API, which means it can surface discussions from the site within ChatGPT and other new products. It's an agreement similar to the one Reddit signed with Google earlier this year that was reportedly worth $60 million. The deal will also "enable Reddit to bring new AI-powered features to Redditors and mods" and use OpenAI's large language models to build applications. OpenAI has also signed up to become an advertising partner on Reddit.

No financial terms were revealed in the blog post announcing the arrangement, and neither company mentioned training data, either. That last detail is different from the deal with Google, where Reddit explicitly stated it would give Google "more efficient ways to train models." There is, however, a disclosure mentioning that OpenAI CEO Sam Altman is also a shareholder in Reddit but that "This partnership was led by OpenAI's COO and approved by its independent Board of Directors."
"Reddit has become one of the internet's largest open archives of authentic, relevant, and always up-to-date human conversations about anything and everything. Including it in ChatGPT upholds our belief in a connected internet, helps people find more of what they're looking for, and helps new audiences find community on Reddit," Reddit CEO Steve Huffman says.

Reddit stock has jumped on news of the deal, rising 13% on Friday to $63.64. As Reuters notes, it's "within striking distance of the record closing price of $65.11 hit in late-March, putting the company on track to add $1.2 billion to its market capitalization."
Privacy

User Outcry As Slack Scrapes Customer Data For AI Model Training (securityweek.com) 33

New submitter txyoji shares a report: Enterprise workplace collaboration platform Slack has sparked a privacy backlash with the revelation that it has been scraping customer data, including messages and files, to develop new AI and ML models. By default, and without requiring users to opt-in, Slack said its systems have been analyzing customer data and usage information (including messages, content and files) to build AI/ML models to improve the software.

The company insists it has technical controls in place to block Slack from accessing the underlying content and promises that data will not lead across workplaces but, despite these assurances, corporate Slack admins are scrambling to opt-out of the data scraping. This line in Slack's communication sparked a social media controversy with the realization that content in direct messages and other sensitive content posted to Slack was being used to develop AI/ML models and that opting out world require sending e-mail requests: "If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at feedback@slack.com with your workspace/org URL and the subject line 'Slack global model opt-out request'. We will process your request and respond once the opt-out has been completed."

AI

OpenAI's Long-Term AI Risk Team Has Disbanded (wired.com) 21

An anonymous reader shares a report: In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI's chief scientist and one of the company's cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power. Now OpenAI's "superalignment team" is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday's news that Sutskever was leaving the company, and the resignation of the team's other colead. The group's work will be absorbed into OpenAI's other research efforts.

Sutskever's departure made headlines because although he'd helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board. Hours after Sutskever's departure was announced on Tuesday, Jan Leike, the former DeepMind researcher who was the superalignment team's other colead, posted on X that he had resigned.

Cloud

Companies Are So Desperate For Data Centers They're Leasing Them Before They're Even Built (sherwood.news) 23

Data center construction levels are at an all-time high. And more than ever, companies that need them have already called dibs. From a report: In the first quarter of 2024, what amounts to about half of the existing supply of data center megawattage in the US is under construction, according to real estate services firm CBRE. And 84% of that is already leased. Typically that rate had been about 50% the last few years -- already notably higher than other real estate classes. "I'm astonished and impressed by the demand for facilities yet to be fully constructed," CBRE Data Center Research Director Gordon Dolven told Sherwood.

That advanced interest means that despite the huge amount of construction, there's still going to be a shortage of data centers to meet demand. In other words, data center vacancy rates are staying low and rents high. Nationwide the vacancy rates are near record lows of 3.7% and average asking rent for data centers was up 19% year over year, according to CBRE. It was up 42% in Northern Virginia, where many data centers are located. These sorts of price jumps are "unprecedented" compared with other types of real estate. For comparison, rents for industrial and logistics real estate, another hot asset class used in e-commerce, is expected to go up 8% this year.

Operating Systems

NetBSD Bans AI-Generated Code (netbsd.org) 61

Seven Spirals writes: NetBSD committers are now banned from using any AI-generated code from ChatGPT, CoPilot, or other AI tools. Time will tell how this plays out with both their users and core team. "If you commit code that was not written by yourself, double check that the license on that code permits import into the NetBSD source repository, and permits free distribution," reads NetBSD's updated commit guidelines. "Check with the author(s) of the code, make sure that they were the sole author of the code and verify with them that they did not copy any other code. Code generated by a large language model or similar technology, such as GitHub/Microsoft's Copilot, OpenAI's ChatGPT, or Facebook/Meta's Code Llama, is presumed to be tainted code, and must not be committed without prior written approval by core."
Sony

Sony Lays Down the Gauntlet on AI 37

Sony Music Group, one of the world's biggest record labels, warned AI companies and music streaming platforms not to use the company's content without explicit permission. From a report: Sony Music, whose artists include Lil Nas X and Celine Dion, sent letters to more than 700 companies in an effort to protect its intellectual property, which includes album cover art, metadata, musical compositions and lyrics, from being used for training AI models. "Unauthorized use" of Sony Music Group content in the "training, development or commercialization of AI systems" deprives the company and its artists of control and compensation for those works, according to the letter, which was obtained by Bloomberg News.

[...] Sony Music, along with the rest of the industry, is scrambling to balance the creative potential of the fast-moving technology while also protecting artists' rights and its own profits. "We support artists and songwriters taking the lead in embracing new technologies in support of their art," Sony Music Group said in statement Thursday. "However, that innovation must ensure that songwriters' and recording artists' rights, including copyrights, are respected."
AI

Hugging Face Is Sharing $10 Million Worth of Compute To Help Beat the Big AI Companies (theverge.com) 10

Kylie Robison reports via The Verge: Hugging Face, one of the biggest names in machine learning, is committing $10 million in free shared GPUs to help developers create new AI technologies. The goal is to help small developers, academics, and startups counter the centralization of AI advancements. [...] Delangue is concerned about AI startups' ability to compete with the tech giants. Most significant advancements in artificial intelligence -- like GPT-4, the algorithms behind Google Search, and Tesla's Full Self-Driving system -- remain hidden within the confines of major tech companies. Not only are these corporations financially incentivized to keep their models proprietary, but with billions of dollars at their disposal for computational resources, they can compound those gains and race ahead of competitors, making it impossible for startups to keep up. Hugging Face aims to make state-of-the-art AI technologies accessible to everyone, not just the tech giants. [...]

Access to compute poses a significant challenge to constructing large language models, often favoring companies like OpenAI and Anthropic, which secure deals with cloud providers for substantial computing resources. Hugging Face aims to level the playing field by donating these shared GPUs to the community through a new program called ZeroGPU. The shared GPUs are accessible to multiple users or applications concurrently, eliminating the need for each user or application to have a dedicated GPU. ZeroGPU will be available via Hugging Face's Spaces, a hosting platform for publishing apps, which has over 300,000 AI demos created so far on CPU or paid GPU, according to the company.

Access to the shared GPUs is determined by usage, so if a portion of the GPU capacity is not actively utilized, that capacity becomes available for use by someone else. This makes them cost-effective, energy-efficient, and ideal for community-wide utilization. ZeroGPU uses Nvidia A100 GPU devices to power this operation -- which offer about half the computation speed of the popular and more expensive H100s. "It's very difficult to get enough GPUs from the main cloud providers, and the way to get them -- which is creating a high barrier to entry -- is to commit on very big numbers for long periods of times," Delangue said. Typically, a company would commit to a cloud provider like Amazon Web Services for one or more years to secure GPU resources. This arrangement disadvantages small companies, indie developers, and academics who build on a small scale and can't predict if their projects will gain traction. Regardless of usage, they still have to pay for the GPUs. "It's also a prediction nightmare to know how many GPUs and what kind of budget you need," Delangue said.

Apple

Samsung Mocks Apple's Controversial 'Crush' Ad With 'UnCrush' Pitch 64

Samsung has released a response to Apple's recently criticized "Crush" ad, which featured the destruction of instruments, arcade games, and sculptures to promote the new iPad Pro. Apple subsequently apologized, with an executive admitting they "missed the mark."

In a video titled "UnCrush," created by BBH USA and directed by Zen Pace, Samsung depicts a woman navigating debris reminiscent of Apple's ad, using a Galaxy Tab S9 and Galaxy AI to play guitar, in contrast to Apple's destructive message. "We would never crush creativity," the caption of Samsung's video reads.
Google

Revolutionary New Google Feature Hidden Under 'More' Tab Shows Links To Web Pages (404media.co) 32

An anonymous reader shares a report: After launching a feature that adds more AI junk than ever to search results, Google is experimenting with a radical new feature that lets users see only the results they were looking for, in the form of normal text links. As in, what most people actually use Google for. "We've launched a new 'Web' filter that shows only text-based links, just like you might filter to show other types of results, such as images or videos," the official Google Search Liaison Twitter account, run by Danny Sullivan, posted on Tuesday. The option will appear at the top of search results, under the "More" option.

"We've added this after hearing from some that there are times when they'd prefer to just see links to web pages in their search results, such as if they're looking for longer-form text documents, using a device with limited internet access, or those who just prefer text-based results shown separately from search features," Sullivan wrote. "If you're in that group, enjoy!" Searching Google has become a bloated, confusing experience for users in the last few years, as it's gradually started prioritizing advertisements and sponsored results, spammy affiliate content, and AI-generated web pages over authentic, human-created websites.

Microsoft

'Microsoft's Quest For Short-Term $$$ is Doing Long-Term Damage To Windows, Surface, Xbox, and Beyond' (windowscentral.com) 65

In an op-ed on Windows Central, the site's co-managing editor Jez Corden laments Microsoft's "short-sighted" decision-making and "inconsistent" investment in its products and services, which he argues has led to a loss of trust among customers and missed opportunities in the tech industry. Despite Microsoft's advancements in AI and cloud computing, the company has made "baffling" decisions such as shutting down Windows Phone, under-investing in Xbox, and canceling promising Surface products.

The author argues that Microsoft's lack of commitment to security, customer support, and long-term quality has "damaged" its reputation and hindered its potential for growth. Examples include recent hacking scandals, poor customer service experiences, and the aggressive promotion of Microsoft Edge at the expense of user choice. The author also expresses concern over Microsoft's handling of the Xbox brand, particularly the decision to release exclusive games on PlayStation, which could undermine the reasons for customers to choose Xbox. The op-ed concludes that while Microsoft has the potential to be a leader in the tech industry, its pattern of short-sighted decisions and failure to learn from past mistakes has led to a growing sense of doubt among its customers and observers.
Microsoft

Microsoft Asks Hundreds of China-Based AI Staff To Consider Relocating Amid US-China Tensions (wsj.com) 36

Microsoft is asking hundreds of employees in its China-based cloud-computing and AI operations to consider transferring outside the country, as tensions between Washington and Beijing mount around the critical technology. WSJ: Such staff, mostly engineers with Chinese nationality, were recently offered the opportunity to transfer to countries including the U.S., Ireland, Australia and New Zealand, people familiar with the matter said. The company is asking about 700 to 800 people [non-paywalled link], who are involved in machine learning and other work related to cloud computing, one of the people said.ÂThe move by one of America's biggest cloud-computing and AI companies comes as the Biden administration seeks to put tighter curbs around China's capability to develop state-of-the-art AI. The White House is considering new rules that would require Microsoft and other U.S. cloud-computing companies to get licenses before giving Chinese customers access to AI chips.
Microsoft

Microsoft's AI Push Imperils Climate Goal As Carbon Emissions Jump 30% (bnnbloomberg.ca) 67

Microsoft's ambitious goal to be carbon negative by 2030 is threatened by its expanding AI operations, which have increased its carbon footprint by 30% since 2020. To meet its targets, Microsoft must quickly adopt green technologies and improve efficiency in its data centers, which are critical for AI but heavily reliant on carbon-intensive resources. Bloomberg reports: Now to meet its goals, the software giant will have to make serious progress very quickly in gaining access to green steel and concrete and less carbon-intensive chips, said Brad Smith, president of Microsoft, in an exclusive interview with Bloomberg Green. "In 2020, we unveiled what we called our carbon moonshot. That was before the explosion in artificial intelligence," he said. "So in many ways the moon is five times as far away as it was in 2020, if you just think of our own forecast for the expansion of AI and its electrical needs." [...]

Despite AI's ravenous energy consumption, this actually contributes little to Microsoft's hike in emissions -- at least on paper. That's because the company says in its sustainability report that it's 100% powered by renewables. Companies use a range of mechanisms to make such claims, which vary widely in terms of credibility. Some firms enter into long-term power purchase agreements (PPAs) with renewable developers, where they shoulder some of a new energy plant's risk and help get new solar and wind farms online. In other cases, companies buy renewable energy credits (RECs) to claim they're using green power, but these inexpensive credits do little to spur new demand for green energy, researchers have consistently found. Microsoft uses a mix of both approaches. On one hand, it's one of the biggest corporate participants in power purchase agreements, according to BloombergNEF, which tracks these deals. But it's also a huge purchaser of RECs, using these instruments to claim about half of its energy use is clean, according to its environmental filings in 2022. By using a large quantity of RECs, Microsoft is essentially masking an even larger growth in emissions. "It is Microsoft's plan to phase out the use of unbundled RECs in future years," a spokesperson for the company said. "We are focused on PPAs as a primary strategy."

So what else can be done? Smith, along with Microsoft's Chief Sustainability Officer Melanie Nakagawa, has laid out clear steps in the sustainability report. High among them is to increase efficiency, which is to use the same amount of energy or computing to do more work. That could help reduce the need for data centers, which will reduce emissions and electricity use. On most things, "our climate goals require that we spend money," said Smith. "But efficiency gains will actually enable us to save money." Microsoft has also been at the forefront of buying sustainable aviation fuels that has helped reduce some of its emissions from business travel. The company also wants to partner with those who will "accelerate breakthroughs" to make greener steel, concrete and fuels. Those technologies are starting to work at a small scale, but remain far from being available in commercial quantities even if expensive. Cheap renewable power has helped make Microsoft's climate journey easier. But the tech giant's electricity consumption last year rivaled that of a small European country -- beating Slovenia easily. Smith said that one of the biggest bottlenecks for it to keep getting access to green power is the lack of transmission lines from where the power is generated to the data centers. That's why Microsoft says it's going to increase lobbying efforts to get governments to speed up building the grid.
If Microsoft's emissions remain high going into 2030, Smith said the company may consider bulk purchases of carbon removal credits, even though it's not "the desired course."

"You've got to be willing to invest and pay for it," said Smith. Climate change is "a problem that humanity created and that humanity can solve."
Apple

Apple Brings Eye-Tracking To Recent iPhones and iPads (engadget.com) 35

This week, in celebration of Global Accessibility Awareness Day, Apple is introducing several new accessibility features. Noteworthy additions include eye-tracking support for recent iPhone and iPad models, customizable vocal shortcuts, music haptics, and vehicle motion cues. Engadget reports: The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it. That pause to select is something Apple calls Dwell Control, which has already been available elsewhere in the company's ecosystem like in Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps from launch, since it's a layer in the OS like Assistive Touch. Since Apple already supported eye-tracking in iOS and iPadOS with eye-detection devices connected, the news today is the ability to do so without extra hardware. [...]

There are plenty more features coming to the company's suite of products, including Live Captions in VisionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use Assistive Touch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made these features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, it's likely many of today's tools get officially released with the next iOS.
Apple detailed all the new features in a press release.
Android

Android 15 Gets 'Private Space,' Theft Detection, and AV1 Support (arstechnica.com) 36

An anonymous reader quotes a report from Ars Technica: Google's I/O conference is still happening, and while the big keynote was yesterday, major Android beta releases have apparently been downgraded to Day 2 of the show. Google really seems to want to be primarily an AI company now. Android already had some AI news yesterday, but now that the code-red requirements have been met, we have actual OS news. One of the big features in this release is "Private Space," which Google says is a place where users can "keep sensitive apps away from prying eyes, under an additional layer of authentication."

First, there's a new hidden-by-default portion of the app drawer that can hold these sensitive apps, and revealing that part of the app drawer requires a second round of lock-screen authentication, which can be different from the main phone lock screen. Just like "Work" apps, the apps in this section run on a separate profile. To the system, they are run by a separate "user" with separate data, which your non-private apps won't be able to see. Interestingly, Google says, "When private space is locked by the user, the profile is paused, i.e., the apps are no longer active," so apps in a locked Private Space won't be able to show notifications unless you go through the second lock screen.

Another new Android 15 feature is "Theft Detection Lock," though it's not in today's beta and will be out "later this year." The feature uses accelerometers and "Google AI" to "sense if someone snatches your phone from your hand and tries to run, bike, or drive away with it." Any of those theft-like shock motions will make the phone auto-lock. Of course, Android's other great theft prevention feature is "being an Android phone." Android 12L added a desktop-like taskbar to the tablet UI, showing recent and favorite apps at the bottom of the screen, but it was only available on the home screen and recent apps. Third-party OEMs immediately realized that this bar should be on all the time and tweaked Android to allow it. In Android 15, an always-on taskbar will be a normal option, allowing for better multitasking on tablets and (presumably) open foldable phones. You can also save split-screen-view shortcuts to the taskbar now.

An Android 13 developer feature, predictive back, will finally be turned on by default. When performing the back gesture, this feature shows what screen will show up behind the current screen you're swiping away. This gives a smoother transition and a bit of a preview, allowing you to cancel the back gesture if you don't like where it's going. [...] Because this is a developer release, there are tons of under-the-hood changes. Google is a big fan of its own next-generation AV1 video codec, and AV1 support has arrived on various devices thanks to hardware decoding being embedded in many flagship SoCs. If you can't do hardware AV1 decoding, though, Android 15 has a solution for you: software AV1 decoding.

AI

Senators Urge $32 Billion in Emergency Spending on AI After Finishing Yearlong Review (apnews.com) 109

A bipartisan group of four senators led by Majority Leader Chuck Schumer is recommending that Congress spend at least $32 billion over the next three years to develop AI and place safeguards around it, writing in a report released Wednesday that the U.S. needs to "harness the opportunities and address the risks" of the quickly developing technology. AP: The group of two Democrats and two Republicans said in an interview Tuesday that while they sometimes disagreed on the best paths forward, it was imperative to find consensus with the technology taking off and other countries like China investing heavily in its development. They settled on a raft of broad policy recommendations that were included in their 33-page report. While any legislation related to AI will be difficult to pass, especially in an election year and in a divided Congress, the senators said that regulation and incentives for innovation are urgently needed.

Slashdot Top Deals