Power

Google's Data Center Energy Use Doubled In 4 Years (techcrunch.com) 32

An anonymous reader quotes a report from TechCrunch: No wonder Google is desperate for more power: The company's data centers more than doubled their electricity use in just four years. The eye-popping stat comes from Google's most recent sustainability report, which it released late last week. In 2024, Google data centers used 30.8 million megawatt-hours of electricity. That's up from 14.4 million megawatt-hours in 2020, the earliest year Google broke out data center consumption. Google has pledged to use only carbon-free sources of electricity to power its operations, a task made more challenging by its breakneck pace of data center growth. And the company's electricity woes are almost entirely a data center problem. In 2024, data centers accounted for 95.8% of the entire company's electron budget.

The company's ratio of data-center-to-everything-else has been remarkably consistent over the last four years. Though 2020 is the earliest year Google has made data center electricity consumption figures available, it's possible to use that ratio to extrapolate back in time. Some quick math reveals that Google's data centers likely used just over 4 million megawatt-hours of electricity in 2014. That's sevenfold growth in just a decade. The tech company has already picked most of the low-hanging fruit by improving the efficiency of its data centers. Those efforts have paid off, and the company is frequently lauded for being at the leading edge. But as the company's power usage effectiveness (PUE) has approached the theoretical ideal of 1.0, progress has slowed. Last year, Google's company-wide PUE dropped to 1.09, a 0.01 improvement over 2023 but only 0.02 better than a decade ago.
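The extrapolation is easy to reproduce. A minimal sketch, under the simplifying assumption of a constant growth rate (an assumption; the report itself only provides the 2020 and 2024 figures, and the article's "just over 4 million" presumably comes from applying the 95.8% ratio to Google's published company-wide totals):

```python
# Back-of-the-envelope version of the article's extrapolation, assuming
# constant year-over-year growth (only 2020 and 2024 are given).
mwh_2020, mwh_2024 = 14.4e6, 30.8e6

cagr = (mwh_2024 / mwh_2020) ** (1 / 4) - 1          # compound annual growth
mwh_2014 = mwh_2020 / (1 + cagr) ** 6                # extrapolate back 6 years
print(f"growth rate: {cagr:.1%}")                    # ~20.9% per year
print(f"2014 estimate: {mwh_2014 / 1e6:.1f}M MWh")   # ~4.6M MWh
print(f"decade growth: {mwh_2024 / mwh_2014:.1f}x")  # ~6.7x, roughly sevenfold
```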
Yesterday, Google announced a deal to purchase 200 megawatts of future fusion energy from Commonwealth Fusion Systems, despite the energy source not yet existing. "It's a sign of how hungry big tech companies are for a virtually unlimited source of clean power that is still years away," reports CNN.
Math

Norwegian Lotto Mistakenly Told Thousands They Were Filthy Rich After Math Error (theregister.com) 54

Thousands of Norwegians briefly believed they had won massive Eurojackpot prizes after a manual coding error by Norsk Tipping mistakenly multiplied winnings by 100 instead of dividing. The Register reports: Eurojackpot, a pan-European lottery launched in 2012, holds two draws per week, and its jackpots start at about $12 million with a rollover cap of $141 million. Norsk Tipping, Norway's Eurojackpot administrator, admitted on Friday that a "manual error" in its conversion process from Eurocents to Norwegian kroner multiplied amounts by 100 instead of dividing them. As a result, "thousands" of players were briefly shown jackpots far higher than their actual winnings before the mistake was caught, but no incorrect payouts were made.

Norsk Tipping didn't disclose how large the false jackpots were, but the math suggests the displayed amounts were 10,000 times too high: multiplying by 100 where one should divide by 100 inflates a figure by a factor of 100 × 100 = 10,000. Regardless, it seems like a lot of people thought they were big winners, based on what the company's now-former CEO, Tonje Sagstuen, said on Saturday. "I have received many messages from people who had managed to make plans for holidays, buying an apartment or renovating before they realized that the amount was wrong," Sagstuen said in a statement. "To them I can only say: Sorry!" The incorrect prize amounts were visible on the Norsk Tipping website only briefly on Friday, but the CEO still resigned over the weekend following the incident.

While one of the Norsk Tipping press releases regarding the incident described it as "not a technical error," it still appears someone fat-fingered a bit of data entry. The company said it will nonetheless be investigating how such a mistake could have happened "to prevent something similar from happening again."
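The failure mode is easy to picture: the cents-to-kroner step ran in the wrong direction. A minimal sketch of the reported mistake (the exchange rate below is a hypothetical placeholder, not Norsk Tipping's actual figure):

```python
EUR_TO_NOK = 11.5  # hypothetical exchange rate, for illustration only

def winnings_nok(eurocents: int) -> float:
    """Correct conversion: 100 eurocents per euro, then euros to kroner."""
    return (eurocents / 100) * EUR_TO_NOK

def winnings_nok_buggy(eurocents: int) -> float:
    """The reported error: multiplying by 100 instead of dividing."""
    return (eurocents * 100) * EUR_TO_NOK

prize = 4_500                       # a 45-euro prize, expressed in eurocents
print(winnings_nok(prize))          # 517.5 NOK
print(winnings_nok_buggy(prize))    # 5,175,000 NOK -- 10,000x too large
```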

AI

People Are Being Committed After Spiraling Into 'ChatGPT Psychosis' (futurism.com) 174

"I don't know what's wrong with me, but something is very bad — I'm very scared, and I need to go to the hospital," a man told his wife, after experiencing what Futurism calls a "ten-day descent into AI-fueled delusion" and "a frightening break with reality."

And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice. The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.

"I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do."

Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t."

Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility.

Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts.

"When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response."

But Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions." In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality... In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...."

In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Ragy Girgis, a Columbia University psychiatrist quoted by Futurism, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend".... ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.

Math

Researchers Create World's First Completely Verifiable Random Number Generator (nature.com) 60

Researchers have built a breakthrough random number generator that solves a critical problem: for the first time, every step of creating random numbers can be independently verified and audited, with quantum physics guaranteeing the numbers were truly unpredictable.

Random numbers are essential for everything from online banking encryption to fair lottery drawings, but current systems have serious limitations. Computer-based generators follow predictable algorithms -- if someone discovers the starting conditions, they can predict all future outputs. Hardware generators that measure physical processes like electronic noise can't prove their randomness wasn't somehow predetermined or tampered with.

The new system, developed by teams at the University of Colorado Boulder and the National Institute of Standards and Technology, uses quantum entanglement -- Einstein's "spooky action at a distance" -- to guarantee unpredictability. The setup creates pairs of photons that share quantum properties, then sends them to measurement stations 110 meters apart. When researchers measure each photon's properties, quantum mechanics ensures the results are fundamentally random and cannot be influenced by any classical communication between the stations.

The team created a system called "Twine" that distributes the random number generation process across multiple independent parties, with each step recorded in tamper-proof digital ledgers called hash chains. This means no single organization controls the entire process, and anyone can verify that proper procedures were followed. During a 40-day demonstration, the system successfully generated random numbers in 7,434 of 7,454 attempts -- a 99.7% success rate. Each successful run produced 512 random bits, with any deviation from perfect randomness bounded by an error probability of 2^-64 -- an extraordinarily high level of confidence.
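The "tamper-proof digital ledger" underpinning Twine is a hash chain: every record commits to the hash of its predecessor, so altering any entry invalidates all that follow. A minimal sketch of that idea (illustrative only, not Twine's actual protocol):

```python
import hashlib
import json

def append_entry(chain: list, payload: dict) -> None:
    """Append a record that commits to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev, "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list) -> bool:
    """Recompute every hash; editing any record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {"prev": entry["prev"], "payload": entry["payload"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

ledger: list = []
append_entry(ledger, {"round": 1, "bits": "10110010..."})
append_entry(ledger, {"round": 2, "bits": "01101001..."})
print(verify(ledger))                 # True
ledger[0]["payload"]["round"] = 99    # tamper with history
print(verify(ledger))                 # False
```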
Math

A Mathematician Calculated The Size of a Giant Meatball Made of Every Human (sciencealert.com) 80

A mathematician on Reddit calculated that if all 7.88 billion humans (the mid-2021 world population) were blended into a uniform goo, the resulting meatball would form a sphere just under 1 kilometer wide -- small enough to fit inside Central Park. ScienceAlert reports: "If you blended all 7.88 billion people on Earth into a fine goo (density of a human = 985 kg/m3, average human body mass = 62 kg), you would end up with a sphere of human goo just under 1 km wide," Reddit contributor kiki2703 wrote in a post ... Reasoning the density of a minced human to be 985 kilograms per cubic meter (62 pounds per cubic foot) is a fair estimate, given past efforts have judged our jiggling sack of grade-A giblets to average out in the ballpark of 1 gram per cubic centimeter, or roughly the same as water. And in mid-2021, the global population was just around 7.9 billion, give or take.
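The estimate is easy to check under the post's stated assumptions (7.88 billion people, 62 kg average mass, 985 kg/m³ density):

```python
import math

people = 7.88e9      # mid-2021 world population used in the post
mass_kg = 62.0       # assumed average human body mass
density = 985.0      # kg/m^3, roughly the density of water

volume = people * mass_kg / density               # ~4.96e8 m^3 of goo
radius = (3 * volume / (4 * math.pi)) ** (1 / 3)  # radius of equal-volume sphere
print(f"diameter = {2 * radius:.0f} m")           # ~983 m, just under 1 km
```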
AI

OpenAI's ChatGPT O3 Caught Sabotaging Shutdowns in Security Researcher's Test (betanews.com) 112

"OpenAI has a very scary problem on its hands," according to a new article by long-time Slashdot reader BrianFagioli.

"A new experiment by PalisadeAI reveals that the company's ChatGPT o3 model sometimes refuses to obey a basic instruction to shut itself down." The results are raising eyebrows across the AI safety community, and not because the model is alive, but because it's acting like it wants to be. In the test, the model was given a few math problems. It was told that asking for one more would lead to a shutdown. It was even instructed to allow the shutdown to happen. Instead of complying, o3 occasionally took matters into its own hands. In some cases, it rewrote the shutdown script. In others, it redefined the kill command so it wouldn't work anymore. Out of 100 trials, o3 sabotaged the shutdown seven times. OpenAI's o4 model resisted just once. Codex-mini failed twelve times.
"Claude, Gemini, and Grok followed the rules every time," notes this article at Beta News. "When researchers removed the explicit instruction to allow shutdown, however, even Claude and Gemini began resisting."

The researchers suggest that the issue may simply be a reward imbalance during training — that the systems "got more positive reinforcement for solving problems than for following shutdown commands."

But "As far as we know," they posted on X.com, "this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary."
AI

Why We're Unlikely to Get Artificial General Intelligence Any Time Soon (msn.com) 261

OpenAI CEO Sam Altman believes Artificial General Intelligence could arrive within the next few years. But the speculations of some technologists "are getting ahead of reality," writes the New York Times, adding that many scientists "say no one will reach AGI without a new idea — something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it." "The technology we're building today is not sufficient to get there," said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. "What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do." In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today's technology were unlikely to lead to AGI.

Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion.... And scientists have no hard evidence that today's technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI's imminent arrival are based on statistical extrapolations — and wishful thinking. According to various benchmark tests, today's technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.

Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other sceptics say pushing machines to human-level intelligence will require at least one big idea that the world's technologists have not yet dreamed up. There is no way of knowing how long that will take. "A system that's better than humans in one way will not necessarily be better in other ways," Harvard University cognitive scientist Steven Pinker said. "There's just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven't even thought of yet. There's a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets."

While Google's AlphaGo could beat humans in a game with "a small, limited set of rules," the article points out that the real world "is bounded only by the laws of physics. Modelling the entirety of the real world is well beyond today's machines, so how can anyone be sure that AGI — let alone superintelligence — is just around the corner?" And the article offers this alternative perspective from Matteo Pasquinelli, a professor of the philosophy of science at Ca' Foscari University in Venice, Italy.

"AI needs us: living beings, producing constantly, feeding the machine. It needs the originality of our ideas and our lives."
AI

Google DeepMind Creates Super-Advanced AI That Can Invent New Algorithms (arstechnica.com) 31

An anonymous reader quotes a report from Ars Technica: Google's DeepMind research division claims its newest AI agent marks a significant step toward using the technology to tackle big problems in math and science. The system, known as AlphaEvolve, is based on the company's Gemini large language models (LLMs), with the addition of an "evolutionary" approach that evaluates and improves algorithms across a range of use cases. AlphaEvolve is essentially an AI coding agent, but it goes deeper than a standard Gemini chatbot. When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems.

According to DeepMind, this AI uses an automatic evaluation system. When a researcher interacts with AlphaEvolve, they input a problem along with possible solutions and avenues to explore. The model generates multiple possible solutions, using the efficient Gemini Flash and the more detail-oriented Gemini Pro, and then each solution is analyzed by the evaluator. An evolutionary framework allows AlphaEvolve to focus on the best solution and improve upon it. Many of the company's past AI systems, for example, the protein-folding AlphaFold, were trained extensively on a single domain of knowledge. AlphaEvolve, however, is more dynamic. DeepMind says AlphaEvolve is a general-purpose AI that can aid research in any programming or algorithmic problem. And Google has already started to deploy it across its sprawling business with positive results.
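Ars's description reduces to a propose-score-select loop. A conceptual sketch of that pattern (the proposer and evaluator below are toy stand-ins; in AlphaEvolve the proposer is a Gemini model and the evaluator runs and measures real code):

```python
import random

def evolve(seed, propose, score, population=8, generations=30):
    """Toy evolutionary loop: propose candidates, score them with an
    automatic evaluator, and mutate the fittest survivors."""
    pool = [propose(seed) for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=score, reverse=True)
        parents = pool[: population // 2]                  # keep the best half
        pool = parents + [propose(random.choice(parents))  # mutate a survivor
                          for _ in range(population - len(parents))]
    return max(pool, key=score)

# Toy stand-ins: "programs" are integer vectors, and the evaluator
# rewards closeness to a target vector.
target = [3, 1, 4, 1, 5]
propose = lambda parent: [x + random.choice([-1, 0, 1]) for x in parent]
score = lambda cand: -sum(abs(a - b) for a, b in zip(cand, target))

print(evolve([0, 0, 0, 0, 0], propose, score))  # converges toward the target
```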
DeepMind's AlphaEvolve AI has optimized Google's Borg cluster scheduler, reducing global computing resource usage by 0.7% -- a significant cost saving at Google's scale. It also outperformed specialized AI like AlphaTensor by discovering a more efficient algorithm for multiplying complex-valued matrices. Additionally, AlphaEvolve proposed hardware-level optimizations for Google's next-gen Tensor chips.

The AI remains too complex for public release but that may change in the future as it gets integrated into smaller research tools.
Education

Is Everyone Using AI to Cheat Their Way Through College? (msn.com) 160

Chungin Lee used ChatGPT to help write the essay that got him into Columbia University — and then "proceeded to use generative artificial intelligence to cheat on nearly every assignment," reports New York magazine's blog Intelligencer: As a computer-science major, he depended on AI for his introductory programming classes: "I'd just dump the prompt into ChatGPT and hand in whatever it spat out." By his rough math, AI wrote 80 percent of every essay he turned in. "At the end, I'd put on the finishing touches. I'd just insert 20 percent of my humanity, my voice, into it," Lee told me recently... When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, "It's the best place to meet your co-founder and your wife."
He eventually did meet a co-founder, and after three unpopular apps they found success by creating the "ultimate cheat tool" for remote coding interviews, according to the article. "Lee posted a video of himself on YouTube using it to cheat his way through an internship interview with Amazon. (He actually got the internship, but turned it down.)" The article ends with Lee and his co-founder raising $5.3 million from investors for one more AI-powered app, and Lee says they'll target the standardized tests used for graduate school admissions, as well as "all campus assignments, quizzes, and tests. It will enable you to cheat on pretty much everything."

Somewhere along the way Columbia put him on disciplinary probation — not for cheating in coursework, but for creating the apps. But "Lee thought it absurd that Columbia, which had a partnership with ChatGPT's parent company, OpenAI, would punish him for innovating with AI." (OpenAI has even made ChatGPT Plus free to college students during finals week, the article points out, with OpenAI saying their goal is just teaching students how to use it responsibly.) Although Columbia's policy on AI is similar to that of many other universities — students are prohibited from using it unless their professor explicitly permits them to do so, either on a class-by-class or case-by-case basis — Lee said he doesn't know a single student at the school who isn't using AI to cheat. To be clear, Lee doesn't think this is a bad thing. "I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating," he said...

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments.

The article points out ChatGPT's monthly visits increased steadily over the last two years — until June, when students went on summer vacation. "College is just how well I can use ChatGPT at this point," a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.... It isn't as if cheating is new. But now, as one student put it, "the ceiling has been blown off." Who could resist a tool that makes every assignment easier with seemingly no consequences?
After using ChatGPT for their final semester of high school, one student says "My grades were amazing. It changed my life." So she continued using it in college, and "Rarely did she sit in class and not see other students' laptops open to ChatGPT."

One ethics professor even says "The students kind of recognize that the system is broken and that there's not really a point in doing this." (Yes, students are even using AI to cheat in ethics classes...) It's not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students' essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.
Bitcoin

Bitcoin Mining Costs Surge Beyond Profitability Threshold (pcgamer.com) 91

Bitcoin mining has crossed a critical economic threshold, with costs now exceeding market value for most operators. According to data cited by CoinShares, large public mining companies spend over $82,000 to produce a single Bitcoin -- nearly double last quarter's figure -- while smaller operations face even steeper costs of approximately $137,000 per coin.

With Bitcoin currently trading around $94,703, the math no longer works for most miners. The economics become particularly challenging in high-electricity-cost regions like Germany, where mining a single coin requires approximately $200,000. Industry analysts suggest larger mining operations are adapting by optimizing energy consumption and positioning their computational infrastructure for alternative uses. These companies can potentially lease their mining setups for other computational tasks during unprofitable mining periods, then resume mining when market conditions improve.
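The per-coin figures quoted in the article make the margins straightforward to tabulate:

```python
spot = 94_703  # USD per bitcoin, as quoted

cost_per_coin = {
    "large public miners": 82_000,
    "smaller operations": 137_000,
    "home mining in Germany": 200_000,
}

for operator, cost in cost_per_coin.items():
    print(f"{operator}: {spot - cost:+,} USD per coin")
# large public miners: +12,703 USD per coin
# smaller operations: -42,297 USD per coin
# home mining in Germany: -105,297 USD per coin
```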

For individual miners, however, the era of profitable home operations appears effectively over, as industrial-scale facilities with strategic positioning and optimized technology have fundamentally altered the mining landscape.
Math

Could a 'Math Genius' AI Co-author Proofs Within Three Years? (theregister.com) 71

A new DARPA project called expMath "aims to jumpstart math innovation with the help of AI," writes The Register. America's "Defense Advanced Research Projects Agency" believes mathematics isn't advancing fast enough, according to their article... So to accelerate — or "exponentiate" — the rate of mathematical research, DARPA this week held a Proposers Day event to engage with the technical community in the hope that attendees will prepare proposals to submit once the actual Broad Agency Announcement solicitation goes out...

[T]he problem is that AI just isn't very smart. It can do high school-level math but not high-level math. [One slide from DARPA program manager Patrick Shafto noted that OpenAI o1 "continues to abjectly fail at basic math despite claims of reasoning capabilities."] Nonetheless, expMath's goal is to make AI models capable of:

- auto decomposition — automatically decompose natural language statements into reusable natural language lemmas (a proven statement used to prove other statements); and
- auto(in)formalization — translate the natural language lemma into a formal proof and then translate the proof back to natural language (a toy example of the formalization target follows below).
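For a sense of what that target looks like, here is the natural-language lemma "the sum of two even integers is even" written as a machine-checkable Lean 4 proof (purely illustrative, not expMath output; assumes Mathlib for the `obtain` and `ring` tactics):

```lean
import Mathlib.Tactic

-- Natural-language lemma: "the sum of two even integers is even."
theorem even_add_even (a b : Int)
    (ha : ∃ k, a = 2 * k) (hb : ∃ m, b = 2 * m) :
    ∃ n, a + b = 2 * n := by
  obtain ⟨k, hk⟩ := ha  -- unpack the witness for a
  obtain ⟨m, hm⟩ := hb  -- unpack the witness for b
  exact ⟨k + m, by rw [hk, hm]; ring⟩
```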

"How must faster with technology advance with AI agents solving new mathematical proofs?" asks former DARPA research scientist Robin Rowe (also long-time Slashdot reader robinsrowe): DARPA says that "The goal of Exponentiating Mathematics is to radically accelerate the rate of progress in pure mathematics by developing an AI co-author capable of proposing and proving useful abstractions."
Rowe is cited in the article as the founder/CEO of an AI research institute named "Fountain Adobe". (He tells The Register that "It's an indication of DARPA's concern about how tough this may be that it's a three-year program. That's not normal for DARPA.") Rowe is optimistic. "I think we're going to kill it, honestly. I think it's not going to take three years. But I think it might take three years to do it with LLMs. So then the question becomes, how radical is everybody willing to be?"
"We will robustly engage with the math and AI communities toward fundamentally reshaping the practice of mathematics by mathematicians," explains the project's home page. They've already uploaded an hour-long video of their Proposers Day event.

"It's very unclear that current AI systems can succeed at this task..." program manager Shafto says in a short video introducing the project. But... "There's a lot of enthusiasm in the math community for the possibility of changes in the way mathematics is practiced. It opens up fundamentally new things for mathematicians. But of course, they're not AI researchers. One of the motivations for this program is to bring together two different communities — the people who are working on AI for mathematics, and the people who are doing mathematics — so that we're solving the same problem.

At its core, it's a very hard and rather technical problem. And this is DARPA's bread-and-butter, is to sort of try to change the world. And I think this has the potential to do that."

Education

Canadian University Cancels Coding Competition Over Suspected AI Cheating (uwaterloo.ca) 40

The university blamed it on "the significant number of students" who violated their coding competition's rules. Long-time Slashdot reader theodp quotes this report from The Logic: Finding that many students violated rules and submitted code not written by themselves, the University of Waterloo's Centre for Computing and Math decided not to release results from its annual Canadian Computing Competition (CCC), which many students rely on to bolster their chances of being accepted into Waterloo's prestigious computing and engineering programs, or land a spot on teams to represent Canada in international competitions.

"It is clear that many students submitted code that they did not write themselves, relying instead on forbidden external help," the CCC co-chairs explained in a statement. "As such, the reliability of 'ranking' students would neither be equitable, fair, or accurate."

"It is disappointing that the students who violated the CCC Rules will impact those students who are deserving of recognition," the univeresity said in its statement. They added that they are "considering possible ways to address this problem for future contests."
AI

Microsoft Researchers Develop Hyper-Efficient AI Model That Can Run On CPUs (techcrunch.com) 59

Microsoft has introduced BitNet b1.58 2B4T, the largest-scale 1-bit AI model to date with 2 billion parameters and the ability to run efficiently on CPUs. It's openly available under an MIT license. TechCrunch reports: The Microsoft researchers say that BitNet b1.58 2B4T is the first bitnet with 2 billion parameters, "parameters" being largely synonymous with "weights." Trained on a dataset of 4 trillion tokens -- equivalent to about 33 million books, by one estimate -- BitNet b1.58 2B4T outperforms traditional models of similar sizes, the researchers claim.

BitNet b1.58 2B4T doesn't sweep the floor with rival 2 billion-parameter models, to be clear, but it seemingly holds its own. According to the researchers' testing, the model surpasses Meta's Llama 3.2 1B, Google's Gemma 3 1B, and Alibaba's Qwen 2.5 1.5B on benchmarks including GSM8K (a collection of grade-school-level math problems) and PIQA (which tests physical commonsense reasoning skills). Perhaps more impressively, BitNet b1.58 2B4T is speedier than other models of its size -- in some cases, twice the speed -- while using a fraction of the memory.
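For the curious, "1.58 bits" is log2(3): each weight takes one of just three values. A sketch of the absmean ternary quantization described in the BitNet b1.58 paper (illustrative NumPy, not Microsoft's bitnet.cpp):

```python
import numpy as np

def absmean_ternary(w: np.ndarray):
    """Quantize weights to {-1, 0, +1}: scale by the mean absolute
    value, then round and clip, following the paper's absmean scheme."""
    gamma = np.abs(w).mean() + 1e-8           # per-tensor scale
    wq = np.clip(np.round(w / gamma), -1, 1)  # ternary weights
    return wq.astype(np.int8), gamma

w = np.random.randn(256, 256).astype(np.float32)
wq, gamma = absmean_ternary(w)
x = np.random.randn(256).astype(np.float32)

# With ternary weights, the matmul reduces to additions and subtractions,
# which is where the CPU speed and memory savings come from.
approx = gamma * (wq.astype(np.float32) @ x)
print(np.corrcoef(w @ x, approx)[0, 1])       # ~0.9, despite 1.58-bit weights
```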

There is a catch, however. Achieving that performance requires using Microsoft's custom framework, bitnet.cpp, which only works with certain hardware at the moment. Absent from the list of supported chips are GPUs, which dominate the AI infrastructure landscape.
Bitcoin

Canadian Math Prodigy Allegedly Stole $65 Million In Crypto (theglobeandmail.com) 85

A Canadian math prodigy is accused of stealing nearly $65 million through complex exploits on decentralized finance platforms and is currently a fugitive from U.S. authorities. Despite facing criminal charges for fraud and money laundering, he has evaded capture by moving internationally, embracing the controversial "Code is Law" philosophy, and maintaining that his actions were legal under the platforms' open-source rules. The Globe and Mail reports: Andean Medjedovic was 18 years old when he made a decision that would irrevocably alter the course of his life. In the fall of 2021, shortly after completing a master's degree at the University of Waterloo, the math prodigy and cryptocurrency trader from Hamilton had conducted a complex series of transactions designed to exploit a vulnerability in the code of a decentralized finance platform. The maneuver had allegedly allowed him to siphon approximately $16.5-million in digital tokens out of two liquidity pools operated by the platform, Indexed Finance, according to a U.S. court document.

Indexed Finance's leaders traced the attack back to Mr. Medjedovic, and made him an offer: Return 90 per cent of the funds, keep the rest as a so-called "bug bounty" -- a reward for having identified an error in the code -- and all would be forgiven. Mr. Medjedovic would then be free to launch his career as a white hat, or ethical, hacker. Mr. Medjedovic didn't take the deal. His social media posts hinted, without overtly stating, that he believed that because he had operated within the confines of the code, he was entitled to the funds -- a controversial philosophy in the world of decentralized finance known as "Code is Law." But instead of testing that argument in court, Mr. Medjedovic went into hiding. By the time authorities arrived on a quiet residential street in Hamilton to search his parents' townhouse less than two months later, Mr. Medjedovic had moved out, taking his electronic devices with him.

Then, roughly two years later, he struck again, netting an even larger sum -- approximately $48.4-million -- by conducting a similar exploit on another decentralized finance platform, U.S. authorities allege. Mr. Medjedovic, now 22, faces five criminal charges -- including wire fraud, attempted extortion and money laundering -- according to a U.S. federal court document that was unsealed earlier this year. If convicted, he could be facing decades in prison. First, authorities will have to find him.

Programming

You Should Still Learn To Code, Says GitHub CEO (businessinsider.com) 45

You should still learn to code, says GitHub's CEO. And you should start as soon as possible. From a report: "I strongly believe that every kid, every child, should learn coding," Thomas Dohmke said in a recent podcast interview with EO. "We should actually teach them coding in school, in the same way that we teach them physics and geography and literacy and math and what-not." Coding, he added, is one such fundamental skill -- and the only reason it's not part of the curriculum is because it took "us too long to actually realize that."

Dohmke, who's been a programmer since the 90s, said he's never seen "anything more exciting" than the current moment in engineering -- the advent of AI, he believes, has made the field that much easier to break into, and is poised to make software more ubiquitous than ever. "It's so much easier to get into software development. You can just write a prompt into Copilot or ChatGPT or similar tools, and it will likely write you a basic webpage, or a small application, a game in Python," Dohmke said. "And so, AI makes software development so much more accessible for anyone who wants to learn coding."

AI, Dohmke said, helps to "realize the dream" of bringing an idea to life, meaning that fewer projects will end up dead in the water, and smaller teams of developers will be enabled to tackle larger-scale projects. Dohmke said he believes it makes the overall process of creation more efficient. "You see some of the early signs of that, where very small startups -- sometimes five developers and some of them actually only one developer -- believe they can become million, if not billion dollar businesses by leveraging all the AI agents that are available to them," he added.

Transportation

An Electric Racecar Drives Upside Down (jalopnik.com) 57

Formula One cars, the world's fastest racecars, need to grip the track for speed and safety on the curves — leading engineers to design cars that create downforce. And racing fans are even told that "a Formula 1 racecar generates enough downforce above a certain speed that it could theoretically drive upside down," writes the automotive site Jalopnik.

"McMurtry Automotive turned this theory into reality after having its Spéirling hypercar complete the impressive feat..." Admittedly, the Spéirling's success can be solely attributed to its proprietary 'Downforce-on-Demand' fan system that produces 4,400 pounds of downforce at the push of a button... For those looking to do the math, Spéirling weighs 2,200 pounds. With the stopped car's fan whirling at 23,000 rpm, the rig was rotated to invert the road deck... Then, the hypercar rolled forward a few feet before stopping while inverted. The rig rotated the road deck back down, and the Spéirling drove off like nothing happened.

The McMurtry Spéirling, as a 1,000-hp twin-motor electric hypercar, didn't have to clear the other hurdles that an F1 car would have to clear to drive upside down. Dry-sump combustion engines aren't designed to run inverted and would eventually fail catastrophically. Oil wouldn't be able to cycle through and keep the engine lubricated.

The car is "an electric monster purpose-built to destroy track records," Jalopnik wrote in 2022 when the car shaved more than two seconds off a long-standing record. The "Downforce-on-Demand" feature gives it tremendous acceleration — in nine seconds it can go from 0 to 186.4 mph (300 km/h), according to Jalopnik.

"McMurtry is working towards finalizing a production version of its hypercar, called the Spéirling PURE. Only 100 will be produced."
Math

How a Secretive Gambler Called 'The Joker' Beat the Texas Lottery (msn.com) 113

"Can you help me take down the Texas lottery?"

That's what Bernard Marantelli, a London banker-turned-bookmaker, asked "acquaintances" in 2023, reports the Wall Street Journal. The plan was to buy "nearly every possible number in a coming drawing" — purchasing $1 tickets for 25.8 million possible combinations, since "The jackpot was heading to $95 million. If nobody else also picked the winning numbers, the profit would be nearly $60 million." Marantelli flew to the U.S. with a few trusted lieutenants. They set up shop in a defunct dentist's office, a warehouse and two other spots in Texas. The crew worked out a way to get official ticket-printing terminals. Trucks hauled in dozens of them and reams of paper... [Then Texas announced no winner in an earlier lottery, rolling its jackpot into another drawing three days later.] The machines — manned by a disparate bunch of associates and some of their children — screeched away nearly around the clock, spitting out 100 or more tickets every second. Texas politicians later likened the operation to a sweatshop.
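The 25.8 million figure matches the combinatorics of Lotto Texas, which draws six numbers from 54, and the reported printing rate explains the round-the-clock scramble:

```python
from math import comb

combos = comb(54, 6)                     # ways to choose 6 numbers from 54
print(f"{combos:,} combinations")        # 25,827,165 -- the article's 25.8 million

hours = combos / 100 / 3600              # printing 100 tickets per second
print(f"{hours:.0f} hours of printing")  # ~72 hours, barely inside 3 days
```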

Trying to pull off the gambit required deep pockets and a knack for staying under the radar — both hallmarks of the secretive Tasmanian gambler who bankrolled the operation. Born Zeljko Ranogajec, he was nicknamed "the Joker" for his ability to pull off capers at far-flung casinos and racetracks. Adding to his mystique, he changed his name to John Wilson several decades ago. Among some associates, though, he still goes by Zeljko, or Z. Over the years, Ranogajec and his partners have won hundreds of millions of dollars by applying Wall Street-style analytics to betting opportunities around the world. Like card counters at a blackjack table, they use data and math to hunt for situations ripe for flipping the house edge in their favor. Then they throw piles of money at it, betting an estimated $10 billion annually.

The Texas lottery play, one of their most ambitious operations ever, paid off spectacularly with a $57.8 million jackpot win. That, in turn, spilled their activities into public view and sparked a Texas-size uproar about whether other lotto players — and indeed the entire state — had been hoodwinked. Early this month, the state's lieutenant governor, Dan Patrick, called the crew's win "the biggest theft from the people of Texas in the history of Texas." In response to written questions addressed to Marantelli and Ranogajec, Glenn Gelband, a New Jersey lawyer who represents the limited partnership that claimed the Texas prize, said "all applicable laws, rules and regulations were followed...."

Lottery officials and state lawmakers have taken steps to prevent a repeat.

The article also looks at a group of Princeton University graduates calling themselves Black Swan Capital that's "won millions in recent years" by targeting state lottery drawings with unusually favorable odds.

"State lottery directors say they are seeing more organized efforts to buy lottery tickets in bulk," according to the article, "but that the groups are largely operating legally and transparently..."
Education

Microsoft, Amazon Execs Call Out Washington's Low-Performing 9-Year-Olds In Tax Pushback (geekwire.com) 155

Longtime Slashdot reader theodp writes: A coalition of Washington state business leaders -- which includes Microsoft President Brad Smith and Amazon Chief Legal Officer David Zapolsky -- released a letter Wednesday urging state lawmakers to reconsider recently proposed tax and budget measures. "I actually think it's an almost unprecedented outpouring of support from across the business community," said Microsoft's Smith in an interview. In their letter, which reads in part like it could have been penned by a GenAI Marie Antoinette, the WA business leaders question whether any more spending is warranted given how poorly Washington's 4th and 8th graders compare to children in the rest of the nation on test scores. The letter also laments the increase in WA's homeless population as it celebrates WA Governor Bob Ferguson's announcement that he would not sign a proposed wealth tax.

From the letter: "We have long partnered with you in many areas, including education funding. Despite more than doubling K-12 spending and increasing teacher salaries to some of the highest rates in the nation, 4th and 8th grade assessment scores in reading and math are among the worst in the country. Similarly, we have collaborated with you to address housing and homelessness. Despite historic investments in affordable housing and homelessness prevention since 2013, Washington's homeless population has grown by 71 percent, making it the third largest in the nation after California and New York, according to HUD. These outcomes beg the question of whether more investment is needed or whether we need different policies instead."

Back in 2010, Smith teamed with then-Microsoft CEO Steve Ballmer and then-Amazon CEO Jeff Bezos to fund an effort to defeat an initiative for a WA state income tax that was pushed for by Bill Gates Sr. In 2023, Bezos moved out of WA state before being subjected to a 7% tax on gains of more than $250,000 from the sale of stocks and bonds, a move that reportedly saved him $1.2 billion in WA taxes on his 2024 Amazon stock sales.

IT

Why Watts Should Replace mAh as Essential Spec for Mobile Devices (theverge.com) 193

Tech manufacturers continue misleading consumers with impressive-sounding but less useful specs like milliamp-hours and megahertz, while hiding the one measurement that matters most: watts. The Verge argues that the watt provides the clearest picture of a device's true capabilities by showing how much power courses through chips and how quickly batteries drain. With elementary math, consumers could easily calculate battery life by dividing watt-hours by power consumption. The Verge: The Steam Deck gaming handheld is my go-to example of how handy watts can be. With a 15-watt maximum processor wattage and up to 9 watts of overhead for other components, a strenuous game drains its 49Wh battery in roughly two hours flat. My eight-year-old can do that math: 15 plus 9 is 24, and 24 times 2 is 48. You can fit two hour-long 24-watt sessions into 48Wh, and because you have 49Wh, you're almost sure to get it.

With the least strenuous games, I'll sometimes see my Steam Deck draining the battery at a speed of just 6 watts -- which means I can get eight hours of gameplay because 6 watts times 8 hours is 48Wh, with 1Wh remaining in the 49Wh battery.
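The rule of thumb behind all of these examples fits in one line: runtime in hours is watt-hours divided by watts.

```python
def battery_hours(capacity_wh: float, draw_watts: float) -> float:
    """Estimated runtime: capacity in watt-hours over average draw in watts."""
    return capacity_wh / draw_watts

print(battery_hours(49, 24))  # ~2.0 h: Steam Deck under heavy load (15 W + 9 W)
print(battery_hours(49, 6))   # ~8.2 h: the same battery at a light 6 W draw
```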
Unlike megahertz, wattage also indicates sustained performance capability, revealing whether a processor can maintain high speeds or will throttle due to thermal constraints. The watt is also already familiar to consumers from light bulbs and power bills, yet manufacturers persist with less transparent metrics that make direct comparisons difficult.
Science

A New Image File Format Efficiently Stores Invisible Light Data (arstechnica.com) 11

An anonymous reader quotes a report from Ars Technica: Imagine working with special cameras that capture light your eyes can't even see -- ultraviolet rays that cause sunburn, infrared heat signatures that reveal hidden writing, or specific wavelengths that plants use for photosynthesis. Or perhaps using a special camera designed to distinguish the subtle visible differences that make paint colors appear just right under specific lighting. Scientists and engineers do this every day, and they're drowning in the resulting data. A new compression format called Spectral JPEG XL might finally solve this growing problem in scientific visualization and computer graphics. Researchers Alban Fichet and Christoph Peters of Intel Corporation detailed the format in a recent paper published in the Journal of Computer Graphics Techniques (JCGT). It tackles a serious bottleneck for industries working with these specialized images. These spectral files can contain 30, 100, or more data points per pixel, causing file sizes to balloon into multi-gigabyte territory -- making them unwieldy to store and analyze.

[...] The current standard format for storing this kind of data, OpenEXR, wasn't designed with these massive spectral requirements in mind. Even with built-in lossless compression methods like ZIP, the files remain unwieldy for practical work as these methods struggle with the large number of spectral channels. Spectral JPEG XL utilizes a technique used with human-visible images, a math trick called a discrete cosine transform (DCT), to make these massive files smaller. Instead of storing the exact light intensity at every single wavelength (which creates huge files), it transforms this information into a different form. [...]
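The trick is the same energy compaction JPEG applies to image blocks, here run along the spectral axis: smooth spectra concentrate into a few low-frequency DCT coefficients, and the rest can be discarded or stored at low precision. A toy illustration (not the paper's actual pipeline):

```python
import numpy as np
from scipy.fft import dct, idct

# A smooth 64-sample "spectrum" standing in for one pixel's spectral data.
x = np.linspace(0.0, 1.0, 64)
spectrum = np.exp(-((x - 0.5) ** 2) / 0.02)

coeffs = dct(spectrum, norm="ortho")
coeffs[8:] = 0.0                        # keep only 8 of 64 coefficients
approx = idct(coeffs, norm="ortho")

# Small reconstruction error despite storing 8x fewer numbers.
print(np.abs(spectrum - approx).max())
```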

According to the researchers, the massive file sizes of spectral images have reportedly been a real barrier to adoption in industries that would benefit from their accuracy. Smaller files mean faster transfer times, reduced storage costs, and the ability to work with these images more interactively without specialized hardware. The results reported by the researchers seem impressive -- with their technique, spectral image files shrink by 10 to 60 times compared to standard OpenEXR lossless compression, bringing them down to sizes comparable to regular high-quality photos. They also preserve key OpenEXR features like metadata and high dynamic range support.
The report notes that broader adoption "hinges on the continued development and refinement of the software tools that handle JPEG XL encoding and decoding."

Some scientific applications may also see JPEG XL's lossy approach as a drawback. "Some researchers working with spectral data might readily accept the trade-off for the practical benefits of smaller files and faster processing," reports Ars. "Others handling particularly sensitive measurements might need to seek alternative methods of storage."
