Microsoft

Microsoft Takes Down AI Model Published by Beijing-Based Researchers Without Adequate Safety Checks (theinformation.com) 49

Microsoft's Beijing-based research group published a new open source AI model on Tuesday, only to remove it from the internet hours later after the company realized that the model hadn't gone through adequate safety testing. From a report: The team that published the model, which is composed of China-based researchers in Microsoft Research Asia, said in a tweet on Tuesday that they "accidentally missed" the safety testing step that Microsoft requires before models can be published.

Microsoft's AI policies require that before any AI models can be published, they must be approved by the company's Deployment Safety Board, which tests whether the models can carry out harmful tasks such as creating violent or disturbing content, according to an employee familiar with the process. In a now-deleted blog post, the researchers behind the model, dubbed WizardLM-2, said that it could carry out tasks like generating text, suggesting code, translating between different languages, or solving some math problems.

Math

73-Year-Old Clifford Stoll Is Now Selling Klein Bottles (berkeley.edu) 46

O'Reilly's "Tech Trends" newsletter included an interesting item this month: Want your own Klein Bottle? Made by Cliff Stoll, author of the cybersecurity classic The Cuckoo's Egg, who will autograph your bottle for you (and may include other surprises).
First described in 1882 by the mathematician Felix Klein, a Klein bottle (like a Mobius strip) has a one-sided surface. ("Need a zero-volume bottle...?" asks Stoll's web site. "Want the ultimate in non-orientability...? A mathematician's delight, handcrafted in glass.")

But how the legendary cyberbreach detective started the company is explained in this 2016 article from a U.C. Berkeley alumni magazine. Its headline? "How a Berkeley Eccentric Beat the Russians — and Then Made Useless, Wondrous Objects." The reward for his cloak-and-dagger wizardry? A certificate of appreciation from the CIA, which is stashed somewhere in his attic... Stoll published a best-selling book, The Cuckoo's Egg, about his investigation. PBS followed it with a NOVA episode entitled "The KGB, the Computer, and Me," a docudrama starring Stoll playing himself and stepping through the "fourth wall" to double as narrator. Stoll had stepped through another wall, as well, into the numinous realm of fame, as the burgeoning tech world went wild with adulation... He was more famous than he ever could have dreamed, and he hated it. "After a few months, you realize how thin fame is, and how shallow. I'm not a software jockey; I'm an astronomer. But all people cared about was my computing."

Stoll's disenchantment also arose from what he perceived as the false religion of the Internet... Stoll articulated his disenchantment in his next book, Silicon Snake Oil, published in 1995, which urged readers to get out from behind their computer screens and get a life. "I was asking what I thought were reasonable questions: Is the electronic classroom an improvement? Does a computer help a student learn? Yes, but what it teaches you is to go to the computer whenever you have a question, rather than relying on yourself. Suppose I was an evil person and wanted to eliminate the curiosity of children. Give the kid a diet of Google, and pretty soon the child learns that every question he has is answered instantly. The coolest thing about being human is to learn, but you don't learn things by looking it up; you learn by figuring it out." It was not a popular message in the rise of the dot-com era, as Stoll soon learned...

Being a Voice in the Wilderness doesn't pay well, however, and by this time Stoll had taken his own advice and gotten a life; namely, marrying and having two children. So he looked around for a way to make some money. That ushered in his third — and current — career as President and Chief Bottle Washer of the aforementioned Acme Klein Bottle company... At first, Stoll had a hard time finding someone to make Klein bottles. He tried a bong peddler on Telegraph Avenue, but the guy took Cliff's money and disappeared. "I realized that the trouble with bong makers is that they're also bong users."

Then in 1994, two friends of his, Tom Adams and George Chittenden, opened a shop in West Berkeley that made glassware for science labs. "They needed help with their computer program and wanted to pay me," Stoll recalls. "I said, 'Nah, let's make Klein bottles instead.' And that's how Acme Klein Bottles was born."

UPDATE: Turns out Stoll is also a long-time Slashdot reader, and shared comments this weekend on everything from watching the eclipse to his VIP parking pass for CIA headquarters and "this CIA guy's rubber-stamp collection."

"I am honored by the attention and kindness of fellow nerds and online friends," Stoll added Saturday. "When I first started on that chase in 1986, I had no idea where it would lead me... To all my friends: May your burdens be light and your purpose high. Stay curious!"
AI

OpenAI Makes ChatGPT 'More Direct, Less Verbose' (techcrunch.com) 36

Kyle Wiggers reports via TechCrunch: OpenAI announced today that premium ChatGPT users -- customers paying for ChatGPT Plus, Team or Enterprise -- can now leverage an updated and enhanced version of GPT-4 Turbo, one of the models that powers the conversational ChatGPT experience. This new model ("gpt-4-turbo-2024-04-09") brings with it improvements in writing, math, logical reasoning and coding, OpenAI claims, as well as a more up-to-date knowledge base. It was trained on publicly available data up to December 2023, in contrast to the previous edition of GPT-4 Turbo available in ChatGPT, which had an April 2023 cut-off. "When writing with ChatGPT [with the new GPT-4 Turbo], responses will be more direct, less verbose and use more conversational language," OpenAI writes in a post on X.
Education

AI's Impact on CS Education Likened to Calculator's Impact on Math Education (acm.org) 102

In Communications of the ACM, Google's VP of Education notes how calculators impacted math education — and wonders whether generative AI will have the same impact on CS education: "Teachers had to find the right amount of long-hand arithmetic and mathematical problem solving for students to do, in order for them to have the 'number sense' to be successful later in algebra and calculus. Too much focus on calculators diminished number sense. We have a similar situation in determining the 'code sense' required for students to be successful in this new realm of automated software engineering. It will take a few iterations to understand exactly what kind of praxis students need in this new era of LLMs to develop sufficient code sense, but now is the time to experiment."
Long-time Slashdot reader theodp notes it's not the first time the Google executive has had to consider "iterating" curriculum: The CACM article echoes earlier comments Google's Education VP made in a featured talk called The Future of Computational Thinking at last year's Blockly Summit. (Blockly is the Google technology that powers the drag-and-drop coding IDEs used for K-12 CS education, including Scratch and Code.org.) Envisioning a world where AI generates code and humans proofread it, Johnson explained: "One can imagine a future where these generative coding systems become so reliable, so capable, and so secure that the amount of time doing low-level coding really decreases for both students and for professionals. So, we see a shift with students to focus more on reading, understanding, and assessing generated code, and less on actually writing it. [...] I don't anticipate that the need for understanding code is going to go away entirely right away [...] I think there will still be, at least in the near term, a need to read and understand code so that you can assess the reliability and correctness of generated code. So, I think in the near term there's still going to be a need for that." In the following Q&A, Johnson is caught by surprise when asked whether there will even be a need for Blockly at all in the AI-driven world as described — and the Google VP concedes there may not be.
Intel

Intel Discloses $7 Billion Operating Loss For Chip-Making Unit (reuters.com) 82

Intel on Tuesday disclosed $7 billion in operating losses for its foundry business in 2023, "a steeper loss than the $5.2 billion in operating losses the year before," reports Reuters. "The unit had revenue of $18.9 billion for 2023, down 31% from $27.49 billion the year before." From the report: Intel shares were down 4.3% after the documents were filed with the U.S. Securities and Exchange Commission (SEC). During a presentation for investors, Chief Executive Pat Gelsinger said that 2024 would be the year of worst operating losses for the company's chipmaking business and that it expects to break even on an operating basis by about 2027. Gelsinger said the foundry business was weighed down by bad decisions, including one years ago against using extreme ultraviolet (EUV) machines from Dutch firm ASML. While those machines can cost more than $150 million, they are more cost-effective than earlier chip making tools.

Partially as a result of the missteps, Intel has outsourced about 30% of the total number of wafers to external contract manufacturers such as TSMC, Gelsinger said. It aims to bring that number down to roughly 20%. Intel has now switched over to using EUV tools, which will cover more and more production needs as older machines are phased out. "In the post EUV era, we see that we're very competitive now on price, performance (and) back to leadership," Gelsinger said. "And in the pre-EUV era we carried a lot of costs and (were) uncompetitive."
Editor's note: This story has been corrected to change the 2022 revenue figure for Intel Foundry to $27.49 billion, as reflected in the source article. We apologize for the math error.
AI

Databricks Claims Its Open Source Foundational LLM Outsmarts GPT-3.5 (theregister.com) 17

Lindsay Clark reports via The Register: Analytics platform Databricks has launched an open source foundational large language model, hoping enterprises will opt to use its tools to jump on the LLM bandwagon. The biz, founded around Apache Spark, published a slew of benchmarks claiming its general-purpose LLM -- dubbed DBRX -- beat open source rivals on language understanding, programming, and math. The developer also claimed it beat OpenAI's proprietary GPT-3.5 across the same measures.

DBRX was developed by Mosaic AI, which Databricks acquired for $1.3 billion, and trained on Nvidia DGX Cloud. Databricks claims it optimized DBRX for efficiency with what it calls a mixture-of-experts (MoE) architecture -- where multiple expert networks or learners divide up a problem. Databricks explained that the model possesses 132 billion parameters, but only 36 billion are active on any one input. Joel Minnick, Databricks marketing vice president, told The Register: "That is a big reason why the model is able to run as efficiently as it does, but also runs blazingly fast. In practical terms, if you use any kind of major chatbots that are out there today, you're probably used to waiting and watching the answer get generated. With DBRX it is near instantaneous."
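The article doesn't detail DBRX's routing scheme, but the "only some parameters are active" property comes from top-k gating: a small router scores all experts and only the k best actually run for a given input. A toy NumPy sketch of that idea (the expert count, dimensions, and gating here are illustrative, not DBRX's actual configuration):

```python
import numpy as np

def moe_forward(x, experts, gate_w, k=4):
    """Route input x to the top-k experts by gate score.

    Only k of the expert networks execute per input, which is how a
    model with a large total parameter count (e.g. 132B) can have a
    much smaller number active on any one token (e.g. 36B).
    """
    scores = x @ gate_w                       # one score per expert
    top = np.argsort(scores)[-k:]             # indices of the k best experts
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                  # softmax over selected experts only
    return sum(w * experts[i](x) for i, w in zip(top, weights))

# Toy setup: 16 tiny "experts", each just a linear map.
rng = np.random.default_rng(0)
d = 8
expert_mats = [rng.normal(size=(d, d)) for _ in range(16)]
experts = [lambda x, M=M: M @ x for M in expert_mats]
gate_w = rng.normal(size=(d, 16))
y = moe_forward(rng.normal(size=d), experts, gate_w, k=4)
```

The compute saving is the point: with 16 experts and k=4, only a quarter of the expert parameters are touched per input, while the router itself stays tiny.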

But the performance of the model itself is not the point for Databricks. The biz is, after all, making DBRX available for free on GitHub and Hugging Face. Databricks is hoping customers use the model as the basis for their own LLMs. If that happens it might improve customer chatbots or internal question answering, while also showing how DBRX was built using Databricks's proprietary tools. Databricks put together the dataset from which DBRX was developed using Apache Spark and Databricks notebooks for data processing, Unity Catalog for data management and governance, and MLflow for experiment tracking.

Math

Pythagoras Was Wrong: There Are No Universal Musical Harmonies, Study Finds (cam.ac.uk) 73

An anonymous reader shares a report: According to the Ancient Greek philosopher Pythagoras, 'consonance' -- a pleasant-sounding combination of notes -- is produced by special relationships between simple numbers such as 3 and 4. More recently, scholars have tried to find psychological explanations, but these 'integer ratios' are still credited with making a chord sound beautiful, and deviation from them is thought to make music 'dissonant,' unpleasant sounding.

But researchers from the University of Cambridge, Princeton and the Max Planck Institute for Empirical Aesthetics, have now discovered two key ways in which Pythagoras was wrong. Their study, published in Nature Communications, shows that in normal listening contexts, we do not actually prefer chords to be perfectly in these mathematical ratios. "We prefer slight amounts of deviation. We like a little imperfection because this gives life to the sounds, and that is attractive to us," said co-author, Dr Peter Harrison, from Cambridge's Faculty of Music and Director of its Centre for Music and Science.

The researchers also found that the role played by these mathematical relationships disappears when you consider certain musical instruments that are less familiar to Western musicians, audiences and scholars. These instruments tend to be bells, gongs, types of xylophones and other kinds of pitched percussion instruments. In particular, they studied the 'bonang,' an instrument from the Javanese gamelan built from a collection of small gongs.

AI

Why Are So Many AI Chatbots 'Dumb as Rocks'? (msn.com) 73

Amazon announced a new AI-powered chatbot last month — still under development — "to help you figure out what to buy," writes the Washington Post. Their conclusion? "[T]he chatbot wasn't a disaster. But I also found it mostly useless..."

"The experience encapsulated my exasperation with new types of AI sprouting in seemingly every technology you use. If these chatbots are supposed to be magical, why are so many of them dumb as rocks?" I thought the shopping bot was at best a slight upgrade on searching Amazon, Google or news articles for product recommendations... Amazon's chatbot doesn't deliver on the promise of finding the best product for your needs or getting you started on a new hobby.

In one of my tests, I asked what I needed to start composting at home. Depending on how I phrased the question, the Amazon bot several times offered basic suggestions that I could find in a how-to article and didn't recommend specific products... When I clicked the suggestions the bot offered for a kitchen compost bin, I was dumped into a zillion options for countertop compost products. Not helpful... Still, when the Amazon bot responded to my questions, I usually couldn't tell why the suggested products were considered the right ones for me. Or, I didn't feel I could trust the chatbot's recommendations.

I asked a few similar questions about the best cycling gloves to keep my hands warm in winter. In one search, a pair that the bot recommended were short-fingered cycling gloves intended for warm weather. In another search, the bot recommended a pair that the manufacturer indicated was for cool temperatures, not frigid winter, or to wear as a layer under warmer gloves... I did find the Amazon chatbot helpful for specific questions about a product, such as whether a particular watch was waterproof or the battery life of a wireless keyboard.

But there's a larger question about whether technology can truly handle this human-interfacing task. "I have also found that other AI chatbots, including those from ChatGPT, Microsoft and Google, are at best hit-or-miss with shopping-related questions..." These AI technologies have potentially profound applications and are rapidly improving. Some people are making productive use of AI chatbots today. (I mostly found helpful Amazon's relatively new AI-generated summaries of customer product reviews.)

But many of these chatbots require you to know exactly how to speak to them, are useless for factual information, constantly make up stuff and in many cases aren't much of an improvement on existing technologies like an app, news articles, Google or Wikipedia. How many times do you need to scream at a wrong math answer from a chatbot, botch your taxes with a TurboTax AI, feel disappointed at a ChatGPT answer or grow bored with a pointless Tom Brady chatbot before we say: What is all this AI junk for...?

"When so many AI chatbots overpromise and underdeliver, it's a tax on your time, your attention and potentially your money," the article concludes.

"I just can't with all these AI junk bots that demand a lot of us and give so little in return."
Math

Pi Calculated to 105 Trillion Digits. (Stored on 1 Petabyte of SSDs) (solidigm.com) 95

Pi was calculated to 100 trillion decimal places in 2022 by a Google team led by cloud developer advocate Emma Haruka Iwao.

But 2024's "pi day" saw a new announcement... After successfully breaking the speed record for calculating pi to 100 trillion digits last year, the team at StorageReview has taken it up a notch, revealing all the numbers of Pi up to 105 trillion digits! Spoiler: the 105 trillionth digit of Pi is 6!

Owner and Editor-in-Chief Brian Beeler led the team that used 36 Solidigm SSDs (nearly a petabyte), chosen for the capacity and reliability required to store the calculated digits of Pi. Although there is no practical application for this many digits, the exercise underscores the astounding capabilities of modern hardware and marks an achievement in computational and storage technology...

For an undertaking of this size, which took 75 days, the role of storage cannot be overstated. "For the Pi computation, we're entirely restricted by storage," says Beeler. "Faster CPUs will help accelerate the math, but the limiting factor to many new world records is the amount of local storage in the box. For this run, we're again leveraging Solidigm D5-P5316 30.72TB SSDs to help us get a little over 1P flash in the system.

"These SSDs are the only reason we could break through the prior records and hit 105 trillion Pi digits."

"Leveraging a combination of open-source and proprietary software, the team at StorageReview optimized the algorithmic process to fully exploit the hardware's capabilities, reducing computational time and enhancing efficiency," Beeler says in the announcement.
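The announcement doesn't name the algorithm here, but record-scale Pi computations conventionally use the rapidly converging Chudnovsky series, which yields roughly 14 digits per term. A small-scale Python sketch using arbitrary-precision decimals (obviously nothing like the distributed, storage-bound machinery a 105-trillion-digit run requires):

```python
from decimal import Decimal, getcontext

def chudnovsky_pi(digits):
    """Compute pi to roughly `digits` decimal places via the
    Chudnovsky series; each term contributes ~14 digits."""
    getcontext().prec = digits + 10           # guard digits for rounding
    C = 426880 * Decimal(10005).sqrt()
    K, M, X, L = 6, 1, 1, 13591409
    S = Decimal(L)
    for i in range(1, digits // 14 + 2):
        M = M * (K**3 - 16 * K) // i**3       # integer recurrence for the terms
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return str(C / S)[: digits + 2]           # "3." plus the requested digits

print(chudnovsky_pi(50))
```

At record scale the series itself is the easy part; as Beeler notes above, streaming the intermediate multi-terabyte integers through storage is what dominates the run.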

There's a video on YouTube where the team discusses their effort.
AI

'AI Prompt Engineering Is Dead' 68

The hype around AI language models has companies scrambling to hire prompt engineers to improve their AI queries and create new products. But new research hints that the AI may be better at prompt engineering than humans, indicating many of these jobs could be short-lived as the technology evolves and automates the role. IEEE Spectrum: Battle and Gollapudi decided to systematically test [PDF] how different prompt engineering strategies impact an LLM's ability to solve grade school math questions. They tested three different open source language models with 60 different prompt combinations each. What they found was a surprising lack of consistency. Even chain-of-thought prompting sometimes helped and other times hurt performance. "The only real trend may be no trend," they write. "What's best for any given model, dataset, and prompting strategy is likely to be specific to the particular combination at hand."

There is an alternative to the trial-and-error style prompt engineering that yielded such inconsistent results: Ask the language model to devise its own optimal prompt. Recently, new tools have been developed to automate this process. Given a few examples and a quantitative success metric, these tools will iteratively find the optimal phrase to feed into the LLM. Battle and his collaborators found that in almost every case, this automatically generated prompt did better than the best prompt found through trial-and-error. And, the process was much faster, a couple of hours rather than several days of searching.
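The automated approach described above amounts to a search loop: propose prompt variants, score each against an eval metric, and keep the best. A toy greedy version, with a mock scorer standing in for the real step of running candidates against an LLM and a labeled eval set (the phrases and scoring here are invented for illustration):

```python
import random

def optimize_prompt(seed_prompt, mutate, score, rounds=20, seed=0):
    """Greedy hill-climb over prompt strings: mutate the current best
    candidate and keep the challenger only if it scores higher."""
    rng = random.Random(seed)
    best = seed_prompt
    for _ in range(rounds):
        challenger = mutate(best, rng)
        if score(challenger) > score(best):
            best = challenger
    return best

# Mock metric: pretend prompts mentioning certain phrases do better.
PHRASES = ["think step by step", "be concise", "show your work"]

def mock_score(prompt):
    return ("step by step" in prompt) + 0.5 * ("show your work" in prompt)

def mutate(prompt, rng):
    return prompt + " " + rng.choice(PHRASES)

best = optimize_prompt("Solve the problem.", mutate, mock_score)
```

Real optimizers (and the tools the researchers describe) are more sophisticated, often asking the LLM itself to propose the next candidates, but the accept-if-better loop is the core of the idea.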
Math

Algebra To Return To San Francisco Middle Schools This Fall (axios.com) 97

After a 6-1 vote by the district board, San Francisco middle schools will teach Algebra I again this fall. Axios reports: SFUSD will begin offering the course to eighth graders this fall at about a third of its 13 middle schools, as well as six of its K-8 schools, the San Francisco Chronicle reports. Students at other campuses will have access to the course via online classes or summer school while their schools take three years to make the transition. Those eighth graders will otherwise have to wait until high school to take the course.

District officials plan to evaluate the best way to enroll students throughout the district in a pilot at the first schools this fall. The first approach would be to enroll all eighth graders. The second would prioritize students' interest or readiness. The third would give students the option of taking Algebra I on top of current eighth-grade math curricula.

The 6-1 vote by the San Francisco Unified School District board Tuesday followed a decadelong battle over eighth graders' access to higher-level math courses and a larger debate over academic opportunity and equity in math performance. SFUSD previously taught eighth-grade algebra. But in 2014, the board voted to wait until high school to try to address racial gaps that had emerged as some students moved quicker to advanced math classes. Studies have shown that inequities including socioeconomic status, language differences and implicit bias often impede Black and Latino students' educational pursuits and result in lower rates of enrollment in higher-level classes. Yes, but: Stanford researchers found last year that large racial and ethnic gaps in advanced math enrollment persisted even after the policy change.

Music

Spotify's Layoffs Put an End To a Musical Encyclopedia (techcrunch.com) 21

An anonymous reader quotes a report from TechCrunch: On a brutal December day, 17% of Spotify employees found out they had been laid off in the company's third round of job cuts last year. Not long after, music fans around the world realized that the cult-favorite website Every Noise at Once (EveryNoise), an encyclopedic goldmine for music discovery, had stopped working. These two events were not disconnected. Spotify data alchemist Glenn McDonald, who created EveryNoise, was one of the 1,500 employees who were let go that day, but his layoff had wider-reaching implications; now that McDonald doesn't have access to internal Spotify data, he can no longer maintain EveryNoise, which became a pivotal resource for the most obsessive music fans to track new releases and learn more about the sounds they love.

"The project is to understand the communities of listening that exist in the world, figure out what they're called, what artists are in them and what their audiences are," McDonald told TechCrunch. "The goal is to use math where you can to find real things that exist in listening patterns. So I think about it as trying to help global music self-organize." If you work at a big tech company and get laid off, you probably won't expect the company's customers to write nine pages of complaints on a community forum, telling your former employer how badly they messed up by laying you off. Nor would you expect an outpouring of Reddit threads and tweets questioning how you could possibly get the axe. But that's how fans reacted when they heard McDonald's fate.

Encryption

Cryptography Guru Martin Hellman Urges International Cooperation on AI, Security (infoworld.com) 18

Martin Hellman "achieved legendary status as co-inventor of the Diffie-Hellman public key exchange algorithm, a breakthrough in software and computer cryptography," notes a new interview in InfoWorld.
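The Diffie-Hellman exchange that made Hellman famous fits in a few lines: two parties each pick a secret exponent, exchange public values, and arrive at the same shared key without ever transmitting a secret. A toy sketch with a deliberately tiny prime (real deployments use large primes, such as the 2048-bit group from RFC 3526, or elliptic-curve variants):

```python
# Toy Diffie-Hellman key exchange. The tiny prime is for illustration
# only and offers no security.
p, g = 23, 5                  # public parameters: prime modulus and generator

a = 6                         # Alice's private exponent
b = 15                        # Bob's private exponent

A = pow(g, a, p)              # Alice transmits g^a mod p
B = pow(g, b, p)              # Bob transmits g^b mod p

alice_key = pow(B, a, p)      # Alice computes (g^b)^a mod p
bob_key = pow(A, b, p)        # Bob computes (g^a)^b mod p

assert alice_key == bob_key   # both arrive at the same shared secret
```

The security rests on the discrete logarithm problem: an eavesdropper sees p, g, A, and B, but recovering a or b from them is computationally infeasible at real-world key sizes.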

Nine years after winning the Turing award, the 78-year-old cryptologist shared his perspective on some other issues: What do you think about the state of digital spying today?

Hellman: There's a need for greater international cooperation. How can we have true cyber security when nations are planning — and implementing — cyber attacks on one another? How can we ensure that AI is used only for good when nations are building it into their weapons systems? Then, there's the granddaddy of all technological threats, nuclear weapons. If we keep fighting wars, it's only a matter of time before one blows up.

The highly unacceptable level of nuclear risk highlights the need to look at the choices we make around critical decisions, including cyber security. We have to take into consideration all participants' needs for our strategies to be effective....

Your battle with the government to make private communication available to the general public in the digital age has the status of folklore. But, in your recent book (co-authored with your wife Dorothie [and freely available as a PDF]), you describe a meeting of minds with Admiral Bobby Ray Inman, former head of the NSA. Until I read your book, I saw the National Security Agency as bad and Diffie-Hellman as good, plain and simple. You describe how you came to see the NSA and its people as sincere actors rather than as a cynical cabal bent on repression. What changed your perspective?

Hellman: This is a great, real-life example of how taking a holistic view in a conflict, instead of just a one-sided one, resolved an apparently intractable impasse. Those insights were part of a major change in my approach to life. As we say in our book, "Get curious, not furious." These ideas are effective not just in highly visible conflicts like ours with the NSA, but in every aspect of life.

Hellman also had an interesting answer when asked if math, game theory, and software development teach any lessons applicable to issues like nuclear non-proliferation or national defense.

"The main thing to learn is that the narrative we (and other nations) tell ourselves is overly simplified and tends to make us look good and our adversaries bad."
Math

Mathematicians Finally Solved Feynman's 'Reverse Sprinkler' Problem (arstechnica.com) 58

Jennifer Ouellette reports via Ars Technica: A typical lawn sprinkler features various nozzles arranged at angles on a rotating wheel; when water is pumped in, they release jets that cause the wheel to rotate. But what would happen if the water were sucked into the sprinkler instead? In which direction would the wheel turn then, or would it even turn at all? That's the essence of the "reverse sprinkler" problem that physicists like Richard Feynman, among others, have grappled with since the 1940s. Now, applied mathematicians at New York University think they've cracked the conundrum, per a recent paper published in the journal Physical Review Letters -- and the answer challenges conventional wisdom on the matter. "Our study solves the problem by combining precision lab experiments with mathematical modeling that explains how a reverse sprinkler operates," said co-author Leif Ristroph of NYU's Courant Institute. "We found that the reverse sprinkler spins in the 'reverse' or opposite direction when taking in water as it does when ejecting it, and the cause is subtle and surprising." [...]

Enter Leif Ristroph and colleagues, who built their own custom sprinkler that incorporated ultra-low-friction rotary bearings so their device could spin freely. They immersed their sprinkler in water and used a special apparatus to either pump water in or pull it out at carefully controlled flow rates. Particularly key to the experiment was the fact that their custom sprinkler let the team observe and measure how water flowed inside, outside, and through the device. Adding dyes and microparticles to the water and illuminating them with lasers helped capture the flows on high-speed video. They ran their experiments for several hours at a time, the better to precisely map the fluid-flow patterns.

Ristroph et al. found that the reverse sprinkler rotates a good 50 times slower than a regular sprinkler, but it operates along similar mechanisms, which is surprising. "The regular or 'forward' sprinkler is similar to a rocket, since it propels itself by shooting out jets," said Ristroph. "But the reverse sprinkler is mysterious since the water being sucked in doesn't look at all like jets. We discovered that the secret is hidden inside the sprinkler, where there are indeed jets that explain the observed motions." A reverse sprinkler acts like an "inside-out rocket," per Ristroph, and although the internal jets collide, they don't do so head-on. "The jets aren't directed exactly at the center because of distortion of the flow as it passes through the curved arm," Ball wrote. "As the water flows around the bends in the arms, it is slung outward by centrifugal force, which gives rise to asymmetric flow profiles." It's admittedly a subtle effect, but their experimentally observed flow patterns are in excellent agreement with the group's mathematical models.

Power

How You Can Charge Your EV If You Don't Own a House (yahoo.com) 186

"According to one study, homeowners are three times more likely than renters to own an electric vehicle," writes the Washington Post. But others still have options: Drivers who park on the street have found novel ways to charge their vehicles, using extension cords running over the sidewalk or even into the branches of a nearby tree... [S]ome municipalities explicitly allow over-the-sidewalk charging as part of a broader strategy to cut transportation emissions... In some areas, homeowners can also hire an electrician to run power under the sidewalk to a curbside charging port. But homeowners should check local rules and permitting requirements for curbside charging. In some highly EV-friendly cities, local governments will cover the costs. In Seattle, a pilot program is installing faster curbside charging for residents who opt in to the program...

If home charging simply isn't an option, some drivers rely on public charging — either using workplace chargers or charging occasionally on DC fast chargers, which can bring an EV battery from 0 to 80 percent in around 20 minutes. The problem is that public charging is more expensive than charging at home — although in most places, still less expensive than gas... For drivers who have access to Tesla superchargers, public charging might still be a solid option — but for non-Tesla drivers, it's still a challenge. Many fast chargers can be broken for days or weeks on end, or can be crowded with other drivers. The popular charging app PlugShare can help EV owners find available charging ports, but relying on public fast charging can quickly become a pain for drivers used to quickly filling up on gas. In those situations, a plug-in hybrid or regular hybrid car might be a better option.

And beyond that, "experts say that there are a key few steps that renters or condo owners can take to access charging," according to the article: The first is looking up local "right-to-charge" laws — regulations that require homeowners' associations or landlords to allow residents to install Level 1 or Level 2 charging. Ten states have "right-to-charge" laws on the books. In California and Colorado, for example, renters or homeowners have the right to install charging at their private parking space or, in some cases, in a public area at their apartment building. Other states, including Florida, Hawaii and New Jersey, have similar but limited laws. Residents can also reach out to landlords or property owners directly and make the case for installing charging infrastructure. All of this "puts a fair amount of onus on the driver," said Ben Prochazka, the executive director of the Electrification Coalition. But, he added, many EV advocacy groups are working on changing building codes in cities and states so that all multifamily homes with parking have to be "EV-ready."
Ingrid Malmgren, policy director at the EV advocacy group Plug In America, tells the newspaper that "communities all over the country are coming up with creative solutions. And it's just going to get easier and easier."
EU

Shameless Insult, Malicious Compliance, Junk Fees, Extortion Regime: Industry Reacts To Apple's Proposed Changes Over Digital Markets Act 255

In response to new EU regulations, Apple on Thursday outlined plans to allow iOS developers to distribute apps outside the App Store starting in March, though developers must still submit apps for Apple's review and pay commissions. Now critics say the changes don't go far enough and Apple retains too much control.

Epic Games CEO Tim Sweeney: They are forcing developers to choose between App Store exclusivity and the store terms, which will be illegal under DMA (Digital Markets Act), or accept a new also-illegal anticompetitive scheme rife with new Junk Fees on downloads and new Apple taxes on payments they don't process. 37signals's David Heinemeier Hansson, who is also the creator of Ruby on Rails: Let's start with the extortion regime that'll befall any large developer who might be tempted to try hosting their app in one of these new alternative app stores that the EU forced Apple to allow. And let's take Meta as a good example. Their Instagram app alone is used by over 300 million people in Europe. Let's just say for easy math there's 250 million of those in the EU. In order to distribute Instagram on, say, a new Microsoft iOS App Store, Meta would have to pay Apple $11,277,174 PER MONTH(!!!) as a "Core Technology Fee." That's $135 MILLION DOLLARS per year. Just for the privilege of putting Instagram into a competing store. No fee if they stay in Apple's App Store exclusively.

Holy shakedown, batman! That might be the most blatant extortion attempt ever committed to public policy by any technology company ever. And Meta has many successful apps! WhatsApp is even more popular in Europe than Instagram, so that's another $135M+/year. Then they gotta pay for the Facebook app too. There's the Messenger app. You add a hundred million here and a hundred million there, and suddenly you're talking about real money! Even for a big corporation like Meta, it would be an insane expense to offer all their apps in these new alternative app stores.

Which, of course, is the entire point. Apple doesn't want Meta, or anyone, to actually use these alternative app stores. They want everything to stay exactly as it is, so they can continue with the rake undisturbed. This poison pill is therefore explicitly designed to ensure that no second-party app store ever takes off. Without any of the big apps, there will be no draw, and there'll be no stores. All of the EU's efforts to create competition in the digital markets will be for nothing. And Apple gets to send a clear signal: If you interrupt our toll-booth operation, we'll make you regret it, and we'll make you pay. Don't resist, just let it be. Let's hope the EU doesn't just let it be.
Coalition for App Fairness, an industry body that represents over 70 firms including Tinder, Spotify, Proton, Tile, and News Media Europe: "Apple clearly has no intention to comply with the DMA. Apple is introducing new fees on direct downloads and payments they do nothing to process, which violates the law. This plan does not achieve the DMA's goal to increase competition and fairness in the digital market -- it is not fair, reasonable, nor non-discriminatory," said Rick VanMeter, Executive Director of the Coalition for App Fairness.

"Apple's proposal forces developers to choose between two anticompetitive and illegal options. Either stick with the terrible status quo or opt into a new convoluted set of terms that are bad for developers and consumers alike. This is yet another attempt to circumvent regulation, the likes of which we've seen in the United States, the Netherlands and South Korea. Apple's 'plan' is a shameless insult to the European Commission and the millions of European consumers they represent -- it must not stand and should be rejected by the Commission."
Math

How Much of the World Is It Possible to Model? 45

Dan Rockmore, the director of the Neukom Institute for Computational Sciences at Dartmouth College, writing for The New Yorker: Recently, statistical modelling has taken on a new kind of importance as the engine of artificial intelligence -- specifically in the form of the deep neural networks that power, among other things, large language models, such as OpenAI's G.P.T.s. These systems sift vast corpora of text to create a statistical model of written expression, realized as the likelihood of given words occurring in particular contexts. Rather than trying to encode a principled theory of how we produce writing, they are a vertiginous form of curve fitting; the largest models find the best ways to connect hundreds of thousands of simple mathematical neurons, using trillions of parameters. They create a vast data structure akin to a tangle of Christmas lights whose on-off patterns attempt to capture a chunk of historical word usage. The neurons derive from mathematical models of biological neurons originally formulated by Warren S. McCulloch and Walter Pitts, in a landmark 1943 paper, titled "A Logical Calculus of the Ideas Immanent in Nervous Activity." McCulloch and Pitts argued that brain activity could be reduced to a model of simple, interconnected processing units, receiving and sending zeros and ones among themselves based on relatively simple rules of activation and deactivation.

The McCulloch-Pitts model was intended as a foundational step in a larger project, spearheaded by McCulloch, to uncover a biological foundation of psychiatry. McCulloch and Pitts never imagined that their cartoon neurons could be trained, using data, so that their on-off states linked to certain properties in that data. But others saw this possibility, and early machine-learning researchers experimented with small networks of mathematical neurons, effectively creating mathematical models of the neural architecture of simple brains, not to do psychiatry but to categorize data. The results were a good deal less than astonishing. It wasn't until vast amounts of good data -- like text -- became readily available that computer scientists discovered how powerful their models could be when implemented on vast scales. The predictive and generative abilities of these models in many contexts are beyond remarkable. Unfortunately, this comes at the expense of understanding just how they do what they do. A new field, called interpretability (or X-A.I., for "explainable" A.I.), is effectively the neuroscience of artificial neural networks.

This is an instructive origin story for a field of research. The field begins with a focus on a basic and well-defined underlying mechanism -- the activity of a single neuron. Then, as the technology scales, it grows in opacity; as the scope of the field's success widens, so does the ambition of its claims. The contrast with climate modelling is telling. Climate models have expanded in scale and reach, but at each step the models must hew to a ground truth of historical, measurable fact. Even models of covid or elections need to be measured against external data. The success of deep learning is different. Trillions of parameters are fine-tuned on larger and larger corpora that uncover more and more correlations across a range of phenomena. The success of this data-driven approach isn't without danger. We run the risk of conflating success on well-defined tasks with an understanding of the underlying phenomenon -- thought -- that motivated the models in the first place.

Part of the problem is that, in many cases, we actually want to use models as replacements for thinking. That's the raison d'être of modelling -- substitution. It's useful to recall the story of Icarus. If only he had just done his flying well below the sun. The fact that his wings worked near sea level didn't mean they were a good design for the upper atmosphere. If we don't understand how a model works, then we aren't in a good position to know its limitations until something goes wrong. By then it might be too late. Eugene Wigner, the physicist who noted the "unreasonable effectiveness of mathematics," restricted his awe and wonder to its ability to describe the inanimate world. Mathematics proceeds according to its own internal logic, and so it's striking that its conclusions apply to the physical universe; at the same time, how they play out varies more the further that we stray from physics. Math can help us shine a light on dark worlds, but we should look critically, always asking why the math is so effective, recognizing where it isn't, and pushing on the places in between.
Math

Google DeepMind's New AI System Can Solve Complex Geometry Problems (technologyreview.com) 10

An anonymous reader quotes a report from MIT Technology Review: Google DeepMind has created an AI system that can solve complex geometry problems. It's a significant step towards machines with more human-like reasoning skills, experts say. Geometry, and mathematics more broadly, have challenged AI researchers for some time. Compared with text-based AI models, there is significantly less training data for mathematics because it is symbol driven and domain specific, says Thang Luong, a coauthor of the research, which is published in Nature today. Solving mathematics problems requires logical reasoning, something that most current AI models aren't great at. This demand for reasoning is why mathematics serves as an important benchmark to gauge progress in AI intelligence, says Luong.

DeepMind's program, named AlphaGeometry, combines a language model with a type of AI called a symbolic engine, which uses symbols and logical rules to make deductions. Language models excel at recognizing patterns and predicting subsequent steps in a process. However, their reasoning lacks the rigor required for mathematical problem-solving. The symbolic engine, on the other hand, is based purely on formal logic and strict rules, which allows it to guide the language model toward rational decisions. These two approaches, responsible for creative thinking and logical reasoning respectively, work together to solve difficult mathematical problems. This closely mimics how humans work through geometry problems, combining their existing understanding with explorative experimentation.

DeepMind says it tested AlphaGeometry on 30 geometry problems at the same level of difficulty found at the International Mathematical Olympiad, a competition for top high school mathematics students. It completed 25 within the time limit. The previous state-of-the-art system, developed by the Chinese mathematician Wen-Tsun Wu in 1978, completed only 10. "This is a really impressive result," says Floris van Doorn, a mathematics professor at the University of Bonn, who was not involved in the research. "I expected this to still be multiple years away." DeepMind says this system demonstrates AI's ability to reason and discover new mathematical knowledge. "This is another example that reinforces how AI can help us advance science and better understand the underlying processes that determine how the world works," said Quoc V. Le, a scientist at Google DeepMind and one of the authors of the research, at a press conference.

AI

Should Chatbots Teach Your Children? 94

"Sal Khan, the founder and CEO of Khan Academy, predicted last year that AI tutoring bots would soon revolutionize education," writes long-time Slashdot reader theodp: His vision of tutoring bots tapped into a decades-old Silicon Valley dream: automated teaching platforms that instantly customize lessons for each student. Proponents argue that developing such systems would help close achievement gaps in schools by delivering relevant, individualized instruction to children faster and more efficiently than human teachers ever could. But some education researchers say schools should be wary of the hype around AI-assisted instruction, warning that generative AI tools may turn out to have harmful or "degenerative" effects on student learning.
A ChatGPT-powered tutoring bot was tested last spring at the Khan Academy — and Bill Gates is enthusiastic about that bot and AI education in general (as well as the Khan Academy and AI-related school curriculums). From the original submission: Explaining his AI vision in November, Bill Gates wrote, "If a tutoring agent knows that a kid likes [Microsoft] Minecraft and Taylor Swift, it will use Minecraft to teach them about calculating the volume and area of shapes, and Taylor's lyrics to teach them about storytelling and rhyme schemes. The experience will be far richer—with graphics and sound, for example—and more personalized than today's text-based tutors."

The New York Times article notes that similar enthusiasm greeted automated teaching tools in the 1960s, but predictions that the mechanical and electronic "teaching machines" — which were programmed to ask students questions on topics like spelling or math — would revolutionize education didn't pan out.

So, is this time different?
AI

OpenAI Launches New Store For Users To Share Custom Chatbots (bloomberg.com) 8

OpenAI has launched an online store where people can share customized versions of the company's popular ChatGPT chatbot, after initially delaying the rollout because of leadership upheaval last year. From a report: The new store, which rolled out Wednesday to paid ChatGPT users, will corral the chatbots that users create for a variety of tasks, for example a version of ChatGPT that can teach math to a child or come up with colorful cocktail recipes. The product, called the GPT Store, will include chatbots that users have chosen to share publicly. It will eventually introduce ways for people to make money from their creations -- much as they might through the app stores of Apple or Alphabet's Google.

Similar to those app stores, OpenAI's GPT Store will let users see the most popular and trending chatbots on a leaderboard and search for them by category. In a blog post announcing the rollout, OpenAI said that people have made 3 million custom chatbots thus far, though it was not clear how many were available through its store at launch. The store's launch comes as OpenAI works to build out its ecosystem of services and find new sources of revenue. On Wednesday, OpenAI also announced a new paid ChatGPT tier for companies with smaller teams that starts at $25 a month per user. OpenAI first launched a corporate version of ChatGPT with added features and privacy safeguards in August.
