Earth

Carbon Record Reveals Evidence of Extensive Human Fire Use 50,000 Years Ago (phys.org) 29

"It has long been unclear when humans started using fire," writes Phys.org... To address this question, researchers from the Institute of Oceanology of the Chinese Academy of Sciences (IOCAS), alongside collaborators from China, Germany, and France, analyzed the pyrogenic carbon record in a 300,000-year-old sediment core from the East China Sea. "Our findings challenge the widely held belief that humans only began influencing the environment with fire in the recent past, during the Holocene," said Dr. Zhao Debo, the study's corresponding author.

This study, published in the Proceedings of the National Academy of Sciences, highlights the presence of charred plant remains — known as pyrogenic carbon — formed when vegetation burns but is not completely consumed by fire. The research reveals a notable increase in fire activity across East Asia approximately 50,000 years ago. This finding aligns with earlier reports of heightened fire activity in Europe, Southeast Asia, and the Papua New Guinea-Australia region, suggesting a continental-scale intensification of fire use during this period... The study highlights that this global rise in fire use coincides with the rapid spread of Homo sapiens, increasing population densities, and a greater reliance on fire, particularly amid cold, glacial conditions...

These conclusions have significant implications for understanding Earth's sensitivity to human impacts. If human fire management altered atmospheric carbon levels tens of thousands of years ago, current climate models may underestimate the historical baseline of human-environment interactions.

AI

Ask Slashdot: Do You Use AI - and Is It Actually Helpful? 247

"I wonder who actually uses AI and why," writes Slashdot reader VertosCay: Out of pure curiosity, I have asked various AI models to create: simple Arduino code, business letters, real estate listing descriptions, and 3D models/vector art for various methods of manufacturing (3D printing, laser printing, CNC machining). None of it has been what I would call "turnkey". Everything required some form of correction or editing before it was usable.

So what's the point?

Their original submission includes more AI-related questions for Slashdot readers ("Do you use it? Why?"), but their biggest question seems to be: "Do you have to correct it?"

And if that's the case, then when you add up all that correction time... "Is it actually helpful?"

Share your own thoughts and experiences in the comments. Do you use AI — and is it actually helpful?
NASA

Mysterious Radio Burst Turns Out to Be From a Dead 1967 NASA Satellite (smithsonianmag.com) 29

An anonymous reader shared this report from Smithsonian magazine: Last year, Australian scientists picked up a mysterious burst of radio waves that briefly appeared brighter than all other signals in the sky. Now, the researchers have discovered the blast didn't come from a celestial object, but a defunct satellite orbiting Earth... "We got all excited, thinking maybe we'd discovered a new pulsar or some other object," says Clancy James, a researcher at Australia's Curtin University who is on the Australian Square Kilometer Array Pathfinder (ASKAP) team, to Alex Wilkins at New Scientist. After taking a closer look, however, the team realized that the only viable source for the burst was NASA's dead Relay 2, a short-lived satellite that hasn't been in operation since 1967....

The researchers also discovered that at the time of the event, the satellite was only around 2,800 miles away from Earth, which explains why the signal appeared so strong. The reason behind Relay 2's sudden burst is not clear, but the team has come up with two potential explanations — and neither involves the satellite coming back to life like a zombie. One relates to electrostatic discharge — a build-up of electricity that can result in a sudden blast. Spacecraft get charged with electricity when they pass through plasma, and once enough charge accumulates, it can create a spark. "New spacecraft are built with materials to reduce the build-up of charge, but when Relay 2 was launched, this wasn't well-understood," explains James to Space.com's Robert Lea. The other idea is that a micrometeorite hit the satellite, releasing a small cloud of plasma and radio waves.
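The inverse-square law explains why proximity matters so much here: received power falls off with the square of distance, so a dead satellite a few thousand kilometers up can briefly outshine genuinely celestial sources thousands of light-years away. A rough back-of-the-envelope comparison (the pulsar distance below is an illustrative assumption, not a figure from the study):

```python
# Illustrative comparison only -- the pulsar distance is an assumption.
d_satellite_km = 4_500          # roughly 2,800 miles, per the report
d_pulsar_km = 2.5e15            # a pulsar a few hundred light-years away

# Received power scales as 1/d^2, so for equal transmitted power the
# nearby satellite has an enormous geometric advantage at the antenna.
advantage = (d_pulsar_km / d_satellite_km) ** 2
print(f"geometric advantage: {advantage:.1e}")
```

Even a millisecond spark from a spacecraft in low orbit can therefore register as "brighter than all other signals in the sky."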

Karen Aplin, a space scientist at the University of Bristol in England who was not involved in the study, tells New Scientist that it would be tough to differentiate between signals produced by each of those two scenarios, because they would look very similar. The researchers say they favor the first idea, however, because micrometeorites the size of the one that could have caused the signal are uncommon.

"Their findings were published in a pre-print paper on the arXiv server that has not yet been peer-reviewed."
Linux

New Linux Kernel Drama: Torvalds Drops Bcachefs Support After Clash (itsfoss.com) 116

Bcachefs "pitches itself as a filesystem that 'doesn't eat your data'," writes the open source/Linux blog It's FOSS. It was last October that Bcachefs developer Kent Overstreet was restricted from participating in the Linux 6.13 kernel development cycle (after ending a mailing list post with "Get your head examined. And get the fuck out of here with this shit.")

And now with the upcoming Linux kernel 6.17 release, Linus Torvalds has decided to drop Bcachefs support, they report, "owing to growing tensions" with Overstreet: The decision follows a series of disagreements over how fixes and changes for it were submitted during the 6.16 release cycle... Kent filed a pull request to add a new feature called "journal-rewind". It was meant to improve bcachefs repair functionality, but it landed during the release candidate (RC) phase, a time usually reserved for bug fixes, not new features, as Linus pointed out. [Adding "I remain steadfastly convinced that anybody who uses bcachefs is expecting it to be experimental. They had better."]

Theodore Ts'o, a long-time kernel developer and maintainer of ext4, also chimed in, saying that Kent's approach risks introducing regressions, especially when changes affect sensitive parts of a filesystem like journaling. He reminded Kent that the rules around the merge window have been a long-standing consensus in the kernel community, and it's Linus's job to enforce them. After some more back and forth, Kent pushed back, arguing that the rules around the merge window aren't absolute and should allow for flexibility, even more so when user data is at stake. He then went ahead and resubmitted the patch, citing instances from XFS and Btrfs where similar fixes made it into the kernel during RCs. Linus merged it into his tree, but ultimately decided to drop Bcachefs entirely in the 6.17 merge window.

To which Kent responded by clarifying that he wasn't trying to shut Linus out of Bcachefs' decisions, stressing that he values Linus's input...

This of course follows the great Torvalds-Overstreet "filesystem people never learn" throwdown back in April.
AI

AI Improves At Improving Itself Using an Evolutionary Trick (ieee.org) 41

Technology writer Matthew Hutson (also Slashdot reader #1,467,653) looks at a new kind of self-improving AI coding system. It rewrites its own code based on empirical evidence of what's helping — as described in a recent preprint on arXiv.

From Hutson's new article in IEEE Spectrum: A Darwin Gödel Machine (or DGM) starts with a coding agent that can read, write, and execute code, leveraging an LLM for the reading and writing. Then it applies an evolutionary algorithm to create many new agents. In each iteration, the DGM picks one agent from the population and instructs the LLM to create one change to improve the agent's coding ability [by creating "a new, interesting, version of the sampled agent"]. LLMs have something like intuition about what might help, because they're trained on lots of human code. What results is guided evolution, somewhere between random mutation and provably useful enhancement. The DGM then tests the new agent on a coding benchmark, scoring its ability to solve programming challenges...
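The loop described above can be sketched in a few lines. This is a toy reconstruction under stated assumptions: `propose_variant` and `benchmark_score` are hypothetical stand-ins for the LLM rewrite step and the SWE-bench/Polyglot evaluation, which are the expensive parts of the real system:

```python
import random

def propose_variant(agent_code: str) -> str:
    # Stand-in for the LLM call that rewrites the agent's own code,
    # producing "a new, interesting version" of the sampled agent.
    return agent_code + f"\n# mutation {random.randint(0, 9999)}"

def benchmark_score(agent_code: str) -> float:
    # Stand-in for running the agent on a coding benchmark; the real
    # score is the fraction of programming challenges solved.
    return random.random()

def dgm_loop(seed_agent: str, iterations: int = 80):
    # The population keeps every agent, not just the current best,
    # so later iterations can branch from interesting dead ends.
    population = [(seed_agent, benchmark_score(seed_agent))]
    for _ in range(iterations):
        parent, _ = random.choice(population)   # sample, not argmax
        child = propose_variant(parent)
        population.append((child, benchmark_score(child)))
    return max(population, key=lambda pair: pair[1])
```

The detail that matters is sampling the parent from the whole population rather than always mutating the current best, which keeps the search from collapsing onto a single lineage: guided evolution, not hill-climbing.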

The researchers ran a DGM for 80 iterations using a coding benchmark called SWE-bench, and ran one for 80 iterations using a benchmark called Polyglot. Agents' scores improved on SWE-bench from 20 percent to 50 percent, and on Polyglot from 14 percent to 31 percent. "We were actually really surprised that the coding agent could write such complicated code by itself," said Jenny Zhang, a computer scientist at the University of British Columbia and the paper's lead author. "It could edit multiple files, create new files, and create really complicated systems."

... One concern with both evolutionary search and self-improving systems — and especially their combination, as in DGM — is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned. (In the study, they found that agents falsely reported using certain tools, so they created a DGM that rewarded agents for not making things up, partially alleviating the problem. One agent, however, hacked the method that tracked whether it was making things up.)

As the article puts it, the agents' improvements compounded "as they improved themselves at improving themselves..."
AI

People Are Being Committed After Spiraling Into 'ChatGPT Psychosis' (futurism.com) 174

"I don't know what's wrong with me, but something is very bad — I'm very scared, and I need to go to the hospital," a man told his wife, after experiencing what Futurism calls a "ten-day descent into AI-fueled delusion" and "a frightening break with reality."

And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice. The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.

"I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do."

Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t."

Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility.

Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts.

"When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response."

But Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions." In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality... In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...."

In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Girgis, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend".... ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.

Crime

Sinaloa Cartel Used Phone Data and Surveillance Cameras To Find and Kill FBI Informants in 2018, DOJ Says (aol.com) 36

Designated as a foreign terrorist group by multiple countries, Mexico's Sinaloa drug cartel fiercely defends its transnational organized crime syndicate.

"A hacker working for the Sinaloa drug cartel was able to obtain an FBI official's phone records," reports Reuters, "and use Mexico City's surveillance cameras to help track and kill the agency's informants in 2018, the U.S. Justice Department said in a report issued on Thursday." The incident was disclosed in a Justice Department Inspector General's audit of the FBI's efforts to mitigate the effects of "ubiquitous technical surveillance," a term used to describe the global proliferation of cameras and the thriving trade in vast stores of communications, travel, and location data... The report said the hacker identified an FBI assistant legal attaché at the U.S. Embassy in Mexico City and was able to use the attaché's phone number "to obtain calls made and received, as well as geolocation data."

The report said the hacker also "used Mexico City's camera system to follow the (FBI official) through the city and identify people the (official) met with." The report said "the cartel used that information to intimidate and, in some instances, kill potential sources or cooperating witnesses."

IT

Duolingo Stock Plummets After Slowing User Growth, Possibly Caused By 'AI-First' Backlash (fool.com) 24

"Duolingo stock fell for the fourth straight trading day on Wednesday," reported Investor's Business Daily, "as data shows user growth slowing for the language-learning software provider."

Jefferies analyst John Colantuoni said he was "concerned" by this drop — saying it "may be the result of Duolingo's poorly received AI-driven hiring announcement in late April (later clarified in late May)." Also Wednesday, DA Davidson analyst Wyatt Swanson slashed his price target on Duolingo stock to 500 from 600, but kept his buy rating. He noted that the "'AI-first' backlash" on social media is hurting Duolingo's brand sentiment. However, he expects the impact to be temporary.

Colantuoni also maintained a "hold" rating on Duolingo stock — though by Monday Duolingo fell below its 50-day moving average line (which Investor's Business Daily calls "a key sell signal.")

And Thursday afternoon (2:30 p.m. EST) Duolingo's stock had dropped 14% for the week, notes The Motley Fool: While 30 days' worth of disappointing daily active user (DAU) data isn't bad in and of itself, it extends a worrying trend. Over the last five months, the company's DAU growth declined from 56% in February to 53% in March, 41% in April, 40% in May [the month after the "AI-first" announcement], and finally 37% in June.

This deceleration is far from a death knell for Duolingo's stock. But the market may be justified in lowering the company's valuation until it sees improving data. Even after this drop, the company trades at 106 times free cash flow, including stock-based compensation.

Maybe everyone's just practicing their language skills with ChatGPT?
Space

Leak Stops on the International Space Station. But NASA Engineers Still Worry (cnn.com) 25

On the International Space Station, air has been slowly leaking out for years from a Russia-controlled module, reports CNN. But recently "station operators realized the gradual, steady leak had stopped. And that raised an even larger concern." It's possible that efforts to seal cracks in the module's exterior wall have worked, and the patches are finally trapping air as intended. But, according to NASA, engineers are also concerned that the module is actually holding a stable pressure because a new leak may have formed on an interior wall — causing air from the rest of the orbiting laboratory to begin rushing into the damaged area. Essentially, space station operators are worried that the entire station is beginning to lose air.
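The ambiguity station operators face can be made concrete with a leak-rate estimate, which is essentially what "changing pressure ... and monitoring over time" amounts to. A minimal sketch, assuming pressure samples from the sealed-off module (function and variable names are hypothetical):

```python
def leak_rate(times_h, pressures_kpa):
    """Least-squares slope of pressure vs. time, in kPa per hour."""
    n = len(times_h)
    mean_t = sum(times_h) / n
    mean_p = sum(pressures_kpa) / n
    num = sum((t - mean_t) * (p - mean_p)
              for t, p in zip(times_h, pressures_kpa))
    den = sum((t - mean_t) ** 2 for t in times_h)
    return num / den
```

A clearly negative slope means air is still escaping. A slope near zero is the ambiguous case: either the patches are holding, or air is entering from the rest of the station as fast as it escapes, which is exactly the scenario NASA's engineers are worried about.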

Much about this issue is unknown. NASA revealed the concerns in a June 14 statement. The agency said it would delay the launch of the private Ax-4 mission, carried out by SpaceX and Houston-based company Axiom Space, as station operators worked to pinpoint the problem. "By changing pressure in the transfer tunnel and monitoring over time, teams are evaluating the condition of the transfer tunnel and the hatch seal," the statement read.

More than a week later, the results of that research are not totally clear. After revealing the new Wednesday launch target Monday night, NASA said in a Tuesday statement that it worked with Roscosmos officials to investigate the issue. The space agencies agreed to lower the pressure in the transfer tunnel, and "teams will continue to evaluate going forward," according to the statement... The cracks are minuscule and mostly invisible to the naked eye, hence the difficulty attempting to patch problem areas.

Axiom Space launched four astronauts to the International Space Station on Wednesday.

But its four-person crew had previously "remained locked in quarantine in Florida for about a month, waiting for their chance to launch," notes CNN, as NASA and the Russian space agency Roscosmos "attempted to sort through" the leak issue.
AI

Call Center Workers Are Tired of Being Mistaken for AI (bloomberg.com) 83

Bloomberg reports: By the time Jessica Lindsey's customers accuse her of being an AI, they are often already shouting. For the past two years, her work as a call center agent for outsourcing company Concentrix has been punctuated by people at the other end of the phone demanding to speak to a real human. Sometimes they ask her straight, 'Are you an AI?' Other times they just start yelling commands: 'Speak to a representative! Speak to a representative...!' Skeptical customers are already frustrated from dealing with the automated system that triages calls before they reach a person. So when Lindsey starts reading from her AmEx-approved script, callers are infuriated by what they perceive to be another machine. "They just end up yelling at me and hanging up," she said, leaving Lindsey sitting in her home office in Oklahoma, shocked and sometimes in tears. "Like, I can't believe I just got cut down at 9:30 in the morning because they had to deal with the AI before they got to me...."

In Australia, Canada, Greece and the US, call center agents say they've been repeatedly mistaken for AI. These people, who spend hours talking to strangers, are experiencing surreal conversations, where customers ask them to prove they are not machines... [Seth, a US-based Concentrix worker] said he is asked if he's AI roughly once a week. In April, one customer quizzed him for around 20 minutes about whether he was a machine. The caller asked about his hobbies, about how he liked to go fishing when not at work, and what kind of fishing rod he used. "[It was as if she wanted] to see if I glitched," he said. "At one point, I felt like she was an AI trying to learn how to be human...."

Sarah, who works in benefits fraud-prevention for the US government — and asked to use a pseudonym for fear of being reprimanded for talking to the media — said she is mistaken for AI between three or four times every month... Sarah tries to change her inflections and tone of voice to sound more human. But she's also discovered another point of differentiation with the machines. "Whenever I run into the AI, it just lets you talk, it doesn't cut you off," said Sarah, who is based in Texas. So when customers start to shout, she now tries to interrupt them. "I say: 'Ma'am (or Sir). I am a real person. I'm sitting in an office in the southern US. I was born.'"

Transportation

Researchers Accuse Uber of Using Opaque Algorithm To Dramatically Boost Its Profits (theguardian.com) 48

"A second major academic institution has accused Uber of using opaque computer code to dramatically increase its profits at the expense of the ride-hailing app's drivers and passengers," reports the Guardian: Research by academics at New York's Columbia Business School concluded that the Silicon Valley company had implemented "algorithmic price discrimination" that had raised "rider fares and cut driver pay on billions of ... trips, systematically, selectively, and opaquely". The Ivy League business school research — which is based on an analysis of "tens of thousands of trips ... as well as an analysis of over 2 million ... trip requests" — follows a similar academic paper based on 1.5m UK trips that was published last week by the University of Oxford. The British study found that many UK Uber drivers were making "substantially less" each hour since the ride-hailing app introduced a "dynamic pricing" algorithm in 2023 that coincided with the company taking a significantly higher share of fares...

[Len Sherman, the US report's author] added: "Since implementing upfront pricing, Uber has increased rider prices, has cut driver pay, has increased its take rates, and, of course, has greatly improved its cashflow during the period covered by this study...." The Columbia paper, which focused on 24,532 trips made by a single US Uber driver, concluded that the introduction of the new algorithm had allowed Uber to "significantly increase its take rate — the per cent of rider fares net of driver pay captured by the company — from about 32% at the start of upfront pricing to upwards of 42% by the end of 2024". Last week's University of Oxford research found that, since the launch of dynamic pricing, Uber's median take rate per UK driver had "increased from 25% to 29%, and on some trips ... is over 50%".
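The "take rate" both studies measure is simple arithmetic: the share of the rider's fare that does not go to the driver. A minimal sketch of the definition from the Columbia paper (the example fares are illustrative, not figures from either study):

```python
def take_rate(rider_fare: float, driver_pay: float) -> float:
    """Percentage of the rider's fare, net of driver pay, kept by the platform."""
    return (rider_fare - driver_pay) / rider_fare * 100

# Illustrative: on a $20 fare, the reported shift from a ~32% take to a
# ~42% take corresponds to roughly $2 moving from the driver to Uber.
early_take = take_rate(20.00, 13.60)   # ~32%
later_take = take_rate(20.00, 11.60)   # ~42%
```

Note that because fares and pay are set independently under upfront pricing, the take rate can vary widely from trip to trip, which is why the Oxford study could find individual trips where it exceeded 50%.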

Thanks to Slashdot reader votsalo for sharing the news.
X

X11 Fork XLibre Released For Testing On Systemd-Free Artix Linux (webpronews.com) 131

An anonymous reader shared this report from WebProNews: The Linux world is abuzz with news of XLibre, a fork of the venerable X11 display server, which aims to be an alternative to X11's successor, Wayland.

Much of the Linux world is working to adopt Wayland, the successor to X11. Wayland has been touted as the superior option, providing better security and performance. Despite Fedora and Ubuntu both going Wayland-only, the newer display protocol still lags behind X11 in terms of functionality, especially in areas such as accessibility, screen recording, and session restore. In addition, despite the promise of improved performance, many users report performance regressions compared to X11.

While progress is being made, it has been slow going, especially for a project that is more than 17 years old. To make matters worse, Wayland is largely being improved by committee, with the various desktop environment teams trying to work together to further the protocol. Progress is further hampered by the fact that the GNOME developers often object to the implementation of some functionality that doesn't fit with their vision of what a desktop should be — despite those features being present and needed in every other environment.

In response, developer Enrico Weigelt has forked X11 into the XLibre project. Weigelt was already one of the most prolific X11 contributors, at a time when few if any improvements or new features were being added to the aging window system... Weigelt has wasted no time releasing the inaugural version of XLibre, XLibre 25.0. The release includes a slew of improvements.

MrBrklyn (Slashdot reader #4,775) adds that Artix Linux, a rolling-release distro based on Arch Linux which does not use systemd, now offers XLibre ISO images and packages for testing and use. They're all non-systemd based, and "It's a decent undertaking by the Artix development team. The ISO is considered to be testing, but it is quickly moving to the regular repos for broad public use."
EU

How a Crewless, AI-Enhanced Vessel Will Patrol Denmark's and NATO's Waters (euronews.com) 5

After past damage to undersea cables, Denmark will boost its surveillance of Baltic Sea/North Sea waters by deploying four uncrewed surface vessels — about 10 meters long — equipped with drones and AI, reports Euronews.

The founder/CEO of the company that makes the vessels — Saildrone — says they'll work "like a truck" that "carries the sensors." And then "we use on-board sophisticated machine learning and AI to fuse that data to give us a full picture of what's above and below the surface." Powered by solar and wind energy, they can operate autonomously for months at sea. [Saildrone] said the autonomous sailboats can support operations such as illegal fishing detection, border enforcement, and strategic asset protection... The four "Voyagers" will be first in operation for a three-month trial, as Denmark and NATO allies aim at extending maritime presence, especially around critical undersea infrastructure such as fibre optic cables and power lines. NATO and its allies have increased sea patrolling following several incidents.
Transportation

Mercedes-AMG to Drop Four-Cylinder for Inline-Sixes and V-8s (caranddriver.com) 79

"Mercedes-AMG is transitioning away from the four-cylinder plug-in hybrid powertrain," reports Car and Driver, "and back towards the inline-six and V-8 powertrains more traditionally associated with the brand." That isn't to say that AMG had a change of heart concerning the merits of the four-cylinder powertrain, but rather that the automaker is responding to customer criticisms. "Technically, the four-cylinder is one of the most advanced drivetrains available in a production car. It's also right up there on performance. But despite this, it failed to resonate with our traditional customers. We've recognized that," a source at Mercedes told Autocar...

Car and Driver also spoke with AMG chief Michael Schiebe at the reveal of the AMG GT XX electric concept car... Although the four-cylinder may be on its way out, Schiebe did say AMG remains committed to plug-in hybrids. "There are a lot of advantages of combining electric motors with combustion engines," Schiebe said. "We want to offer different kinds of drivetrain opportunities on the combustion side to our customers, so they can choose for whatever purpose they want to use the car."

Much of the criticism of the C63 and GLC63's powertrain was focused on the lackluster sound when compared with the symphony of a V-8. The M139 drew our ire for sounding "reedy" and "buzzy" in our test of the current C63. The C63's hybrid system also brings the car's curb weight up to nearly 5000 pounds, meaning it didn't provide a meaningful performance boost over its V-8 predecessor despite offering significantly more horsepower....

AMG wouldn't confirm exactly when the four-cylinder will be phased out, telling Autocar that it will remain in production for the time being before "eventually" being replaced.

Thanks to long-time Slashdot reader sinij for sharing the news.
Desktops (Apple)

After 27 Years, Engineer Discovers How To Display Secret Photo In Power Mac ROM (arstechnica.com) 12

An anonymous reader quotes a report from Ars Technica: On Tuesday, software engineer Doug Brown published his discovery of how to trigger a long-known but previously inaccessible Easter egg in the Power Mac G3's ROM: a hidden photo of the development team that nobody could figure out how to display for 27 years. While Pierre Dandumont first documented the JPEG image itself in 2014, the method to view it on the computer remained a mystery until Brown's reverse engineering work revealed that users must format a RAM disk with the text "secret ROM image."

Brown stumbled upon the image while using a hex editor tool called Hex Fiend with Eric Harmon's Mac ROM template to explore the resources stored in the beige Power Mac G3's ROM. The ROM appeared in desktop, minitower, and all-in-one G3 models from 1997 through 1999. "While I was browsing through the ROM, two things caught my eye," Brown wrote. He found both the HPOE resource containing the JPEG image of team members and a suspicious set of Pascal strings in the PowerPC-native SCSI Manager 4.3 code that included ".Edisk," "secret ROM image," and "The Team."

The strings provided the crucial clue Brown needed. After extracting and disassembling the code using Ghidra, he discovered that the SCSI Manager was checking for a RAM disk volume named "secret ROM image." When found, the code would create a file called "The Team" containing the hidden JPEG data. Brown initially shared his findings on the #mac68k IRC channel, where a user named Alex quickly figured out the activation method. The trick requires users to enable the RAM Disk in the Memory control panel, restart, select the RAM Disk icon, choose "Erase Disk" from the Special menu, and type "secret ROM image" into the format dialog. "If you double-click the file, SimpleText will open it," Brown explains on his blog just before displaying the hidden team photo that emerges after following the steps.
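Brown's starting point, spotting a JPEG embedded in a ROM dump, doesn't require specialized tooling: JPEG data is delimited by the well-known SOI marker (`FF D8 FF`) and EOI marker (`FF D9`). A minimal sketch of that kind of scan (not Brown's actual code, and real firmware can produce false positives):

```python
def find_jpegs(blob: bytes) -> list[bytes]:
    """Scan a binary blob (e.g. a ROM dump) for embedded JPEG images by
    pairing each SOI marker with the next EOI marker that follows it."""
    found = []
    start = blob.find(b"\xff\xd8\xff")          # SOI + start of next marker
    while start != -1:
        end = blob.find(b"\xff\xd9", start + 3)  # EOI
        if end == -1:
            break
        found.append(blob[start:end + 2])
        start = blob.find(b"\xff\xd8\xff", end + 2)
    return found
```

In practice a hex editor with a ROM template, as Brown used, gives the same result with the resource names ("HPOE") attached; the interesting work was tracing who *read* that resource, which is where Ghidra and the SCSI Manager disassembly came in.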

Canada

Canada Orders Chinese Firm Hikvision To Cease Canadian Operations Over National Security Concerns (reuters.com) 45

The Canadian government has ordered Chinese surveillance camera manufacturer Hikvision to cease operations in Canada over national security concerns, Industry Minister Melanie Joly said late on Friday. From a report: Hikvision, also known as Hangzhou Hikvision Digital Technology Co, has faced numerous sanctions and restrictions by Canada's neighbor, the United States, over the past five and a half years for the firm's dealings and the use of its equipment in China's Xinjiang region, where rights groups have documented abuses against the Uyghur population and other Muslim communities.

"The government has determined that Hikvision Canada's continued operations in Canada would be injurious to Canada's national security," Joly said on X, adding that the decision was taken after a multi-step review of information provided by Canada's security and intelligence community."

Graphics

Graphics Artists In China Push Back On AI and Its Averaging Effect (theverge.com) 33

Graphic artists in China are pushing back against AI image generators, which they say "profoundly shifts clients' perception of their work, specifically in terms of how much that work costs and how much time it takes to produce," reports The Verge. "Freelance artists or designers working in industries with clients that invest in stylized, eye-catching graphics, like advertising, are particularly at risk." From the report: Long before AI image generators became popular, graphic designers at major tech companies and in-house designers for large corporate clients were often instructed by managers to crib aesthetics from competitors or from social media, according to one employee at a major online shopping platform in China, who asked to remain anonymous for fear of retaliation from their employer. Where a human would need to understand and reverse engineer a distinctive style to recreate it, AI image generators simply create randomized mutations of it. Often, the results will look like obvious copies and include errors, but other graphic designers can then edit them into a final product.

"I think it'd be easier to replace me if I didn't embrace [AI]," the shopping platform employee says. Early on, as tools like Stable Diffusion and Midjourney became more popular, their colleagues who spoke English well were selected to study AI image generators to increase in-house expertise on how to write successful prompts and identify what types of tasks AI was useful for. Ultimately, it was useful for copying styles from popular artists that, in the past, would take more time to study. "I think it forces both designers and clients to rethink the value of designers," Jia says. "Is it just about producing a design? Or is it about consultation, creativity, strategy, direction, and aesthetic?" [...]

Across the board, though, artists and designers say that AI hype has negatively impacted clients' view of their work's value. Now, clients expect a graphic designer to produce work on a shorter timeframe and for less money, which also has its own averaging impact, lowering the ceiling for what designers can deliver. As clients lower budgets and squish timelines, the quality of the designers' output decreases. "There is now a significant misperception about the workload of designers," [says Erbing, a graphic designer in Beijing who has worked with several ad agencies and asked to be called by his nickname]. "Some clients think that since AI must have improved efficiency, they can halve their budget." But this perception runs contrary to what designers spend the majority of their time doing, which is not necessarily just making any image, Erbing says.

Japan

Japan's Civil War Over Surnames (economist.com) 85

Japanese politicians failed to pass legislation last month that would have allowed married couples to keep separate surnames, despite surveys showing majority public support for the change. Japan remains the only country requiring married couples by law to share the same surname, with women taking their husband's name in 95% of cases.

The ruling Liberal Democratic Party's skepticism blocked opposition bills aimed at reforming the system. Keidanren, Japan's largest business lobby, says the current law "hinders women's advancement" as name changes complicate professional reputations. A study by NGO Asuniwa suggests reform could prompt 590,000 cohabiting couples to marry legally, potentially boosting Japan's birth rate since strong stigmas discourage births outside marriage.

Some couples have developed workarounds. Teachers Uchiyama Yukari and Koike Yukio have divorced and remarried three times to sidestep the law, living unmarried most of the time but remarrying for each child's birth registration before divorcing again.

Medicine

7 People Now Have Neuralink Brain Implant 29

Seven people have now received Neuralink's N1 brain implant, which enables individuals with ALS or spinal cord injuries to control a computer with their thoughts. PCMag reports: In a February 2025 update, Neuralink confirmed that three people had received its brain-computer interface (BCI). That increased to five by June, when it also reported a $650 million funding round. The count now stands at seven, the Barrow Neurological Institute tweeted today; Neuralink retweeted that message.

Six of the seven are participating in the PRIME study, conducted by Barrow, which handles the implantations from its Phoenix, Arizona, office. It aims to prove that the N1 implant, the R1 surgical robot, and the N1 User App on the computer are safe and effective, according to the program brochure. (No BCIs have been approved by the US Food and Drug Administration.)

Participants in the study get the implant through a surgery in which a custom-built robotic arm drills a hole in their skull and implants the device. The implant connects to a computer via Bluetooth, allowing patients to move the cursor, select words to type, browse the web, and even play video games -- a favorite activity of Neuralink's first human patient, Noland Arbaugh, who can do this all without moving any limbs or fingers. [...] Arbaugh, now 31, became paralyzed during a diving accident. Other Neuralink patients include Alex, a former machine parts builder who lost function of his arms and uses his N1 Implant to design 3D machine parts with computer-aided design (CAD). The third patient is Brad, the first person with ALS to receive the N1 implant, according to Barrow.

Mike is the fourth patient, and "the first person with a full-time job to use the N1 Implant," Barrow says. "He worked as a survey technician for city government and spent the majority of his time in the field until his ALS made the work too difficult. Like Alex, Mike has used CAD software with his Neuralink device to continue doing survey work from home and provide for his family." The fifth publicly named patient is RJ, a veteran who became paralyzed after a motorcycle accident, according to the University of Miami. The other two patients remain anonymous, but we can expect Neuralink to continue recruiting more people (here's how to apply).

EU

Denmark To Tackle Deepfakes By Giving People Copyright To Their Own Features (theguardian.com) 48

An anonymous reader quotes a report from The Guardian: The Danish government is to clamp down on the creation and dissemination of AI-generated deepfakes by changing copyright law to ensure that everybody has the right to their own body, facial features and voice. The Danish government said on Thursday it would strengthen protection against digital imitations of people's identities with what it believes to be the first law of its kind in Europe. Having secured broad cross-party agreement, the department of culture plans to submit a proposal to amend the current law for consultation before the summer recess and then submit the amendment in the autumn. It defines a deepfake as a very realistic digital representation of a person, including their appearance and voice.

The Danish culture minister, Jakob Engel-Schmidt, said he hoped the bill before parliament would send an "unequivocal message" that everybody had the right to the way they looked and sounded. He told the Guardian: "In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI." He added: "Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I'm not willing to accept that."

The changes to Danish copyright law will, once approved, theoretically give people in Denmark the right to demand that online platforms remove such content if it is shared without consent. It will also cover "realistic, digitally generated imitations" of an artist's performance without consent. Violation of the proposed rules could result in compensation for those affected. The government said the new rules would not affect parodies and satire, which would still be permitted.
"Of course this is new ground we are breaking, and if the platforms are not complying with that, we are willing to take additional steps," said Engel-Schmidt.

He expressed hope that other European countries will follow suit and warned that "severe fines" will be imposed if tech platforms fail to comply.

Slashdot Top Deals