Media

Better Than JPEG? Researcher Discovers That Stable Diffusion Can Compress Images (arstechnica.com) 93

An anonymous reader quotes a report from Ars Technica: Last week, Swiss software engineer Matthias Buhlmann discovered that the popular image synthesis model Stable Diffusion could compress existing bitmapped images with fewer visual artifacts than JPEG or WebP at high compression ratios, though there are significant caveats. Stable Diffusion is an AI image synthesis model that typically generates images based on text descriptions (called "prompts"). The AI model learned this ability by studying millions of images pulled from the Internet. During the training process, the model makes statistical associations between images and related words, distilling key information about each image into a much smaller representation and storing it as "weights," which are mathematical values that represent what the AI image model knows, so to speak.

When Stable Diffusion analyzes and "compresses" images into weight form, they reside in what researchers call "latent space," which is a way of saying that they exist as a sort of fuzzy potential that can be realized into images once they're decoded. With Stable Diffusion 1.4, the weights file is roughly 4GB, but it represents knowledge about hundreds of millions of images. While most people use Stable Diffusion with text prompts, Buhlmann cut out the text encoder and instead forced his images through Stable Diffusion's image encoder process, which takes a low-precision 512x512 image and turns it into a higher-precision 64x64 latent space representation. At this point, the image exists at a much smaller data size than the original, but it can still be expanded (decoded) back into a 512x512 image with fairly good results.
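To make that pipeline concrete, here is a minimal sketch of the encode/decode round trip using Hugging Face's diffusers library (the VAE component of Stable Diffusion 1.4). The model ID and API calls follow diffusers' published interface, and the input filename is a placeholder; treat it as an illustration of the idea only, since Buhlmann's actual method also quantizes the latents to reach kilobyte-scale sizes.

```python
# Sketch: round-trip a 512x512 image through Stable Diffusion's VAE,
# skipping the text encoder and diffusion process entirely.
import numpy as np
import torch
from diffusers import AutoencoderKL
from PIL import Image

vae = AutoencoderKL.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="vae")

img = Image.open("photo.png").convert("RGB").resize((512, 512))  # placeholder file
x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0        # scale to [-1, 1]
x = x.permute(2, 0, 1).unsqueeze(0)                              # HWC -> NCHW

with torch.no_grad():
    latents = vae.encode(x).latent_dist.sample()  # (1, 4, 64, 64) latent
    recon = vae.decode(latents).sample            # (1, 3, 512, 512) image

out = ((recon[0].permute(1, 2, 0).clamp(-1, 1) + 1) * 127.5).round().byte().numpy()
Image.fromarray(out).save("roundtrip.png")
```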

While running tests, Buhlmann found that images compressed with Stable Diffusion looked subjectively better at higher compression ratios (smaller file size) than JPEG or WebP. In one example, he shows a photo of a candy shop that is compressed down to 5.68KB using JPEG, 5.71KB using WebP, and 4.98KB using Stable Diffusion. The Stable Diffusion image appears to have more resolved details and fewer obvious compression artifacts than those compressed in the other formats. Buhlmann's method currently comes with significant limitations, however: It's not good with faces or text, and in some cases, it can actually hallucinate detailed features in the decoded image that were not present in the source image. (You probably don't want your image compressor inventing details in an image that don't exist.) Also, decoding requires the 4GB Stable Diffusion weights file and extra decoding time.
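For a sense of scale, those file sizes amount to a fraction of a bit per pixel. A quick back-of-envelope using the figures above (assuming 1KB = 1,024 bytes, which the article doesn't specify):

```python
# Bits per pixel implied by the candy-shop example: all three files
# encode the same 512x512 image, so fewer bytes means fewer bits/pixel.
PIXELS = 512 * 512

for fmt, kbytes in [("JPEG", 5.68), ("WebP", 5.71), ("Stable Diffusion", 4.98)]:
    bpp = kbytes * 1024 * 8 / PIXELS
    print(f"{fmt:>16}: {bpp:.3f} bits per pixel")
```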
Buhlmann's code and technical details about his findings can be found on Google Colab and Towards AI.
Data Storage

Small Dongle Brings the HDD Clicking Back To SSDs In Retro PCs (hackaday.com) 117

Longtime Slashdot reader root_42 writes: Remember the clicking sounds of spinning hard disks? One "problem" with retro computing is that we replace those disks with CompactFlash, SD cards or even SSDs. Those do not make any noises that you can hear under usual circumstances, which is partly nice because the computer becomes quieter, but also irritating because sometimes you can't tell if the computer has crashed or is still working. This little device fixes that issue! It's called the HDD Clicker and it's a unique little gadget. "An ATtiny and a few support components ride on a small PCB along with a piezoelectric speaker," describes Hackaday. "The dongle connects to the hard drive activity light, which triggers a series of clicks from the speaker that sound remarkably like a hard drive head seeking tracks."

A demo of the device can be viewed in the accompanying video at 7:09, with a full defragmentation at 13:11.
Power

When's the Best Time To Charge Your EV? Not at Night, Stanford Study Finds (stanford.edu) 190

The vast majority of electric vehicle owners charge their cars at home in the evening or overnight. We're doing it wrong, according to a new Stanford study. From the report: In March, the research team published a paper on a model they created for charging demand that can be applied to an array of populations and other factors. In the new study, published Sept. 22 in Nature Energy, they applied their model to the whole of the Western United States and examined the stress the region's electric grid will come under by 2035 from growing EV ownership. In a little over a decade, they found, rapid EV growth alone could increase peak electricity demand by up to 25%, assuming a continued dominance of residential, nighttime charging. To limit the high costs of all that new capacity for generating and storing electricity, the researchers say, drivers should move to daytime charging at work or public charging stations, which would also reduce greenhouse gas emissions. This finding has policy and investment implications for the region and its utilities, especially since California moved in late August to ban sales of gasoline-powered cars and light trucks starting in 2035. [...]

Current time-of-use rates encourage consumers to switch electricity use to nighttime whenever possible, like running the dishwasher and charging EVs. This rate structure reflects the time before significant solar and wind power supplies, when demand threatened to exceed supply during the day, especially late afternoons in the summer. Today, California has excess electricity during late mornings and early afternoons, thanks mainly to its solar capacity. If most EVs were to charge during these times, then the cheap power would be used instead of wasted. Alternatively, if most EVs continue to charge at night, then the state will need to build more generators -- likely powered by natural gas -- or expensive energy storage on a large scale. Electricity going first to a huge battery and then to an EV battery loses power from the extra stop. At the local level, if a third of homes in a neighborhood have EVs and most of the owners continue to set charging to start at 11 p.m. or whenever electricity rates drop, the local grid could become unstable.
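To see why that "extra stop" matters, here is a back-of-envelope sketch of the round-trip loss; the efficiency figures are illustrative assumptions, not numbers from the study:

```python
# Compare delivered energy for daytime charging (straight from solar)
# versus overnight charging fed through grid-scale battery storage.
# Both efficiency constants are illustrative assumptions.
GRID_STORAGE_ROUND_TRIP = 0.85  # assumed battery storage in/out efficiency
EV_CHARGER_EFFICIENCY = 0.90    # assumed wall-to-battery charging efficiency

def delivered_kwh(generated_kwh: float, via_storage: bool) -> float:
    eff = EV_CHARGER_EFFICIENCY * (GRID_STORAGE_ROUND_TRIP if via_storage else 1.0)
    return generated_kwh * eff

midday = delivered_kwh(100, via_storage=False)  # 90.0 kWh reaches the EV
night = delivered_kwh(100, via_storage=True)    # 76.5 kWh reaches the EV
print(f"daytime: {midday:.1f} kWh, overnight via storage: {night:.1f} kWh")
```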

Another issue with electricity pricing design is charging commercial and industrial customers big fees based on their peak electricity use. This can disincentivize employers from installing chargers, especially once half or more of their employees have EVs. The research team compared several scenarios of charging infrastructure availability, along with several different residential time-of-use rates and commercial demand charges. Some rate changes made the situation at the grid level worse, while others improved it. Nevertheless, a scenario of having charging infrastructure that encourages more daytime charging and less home charging provided the biggest benefits, the study found.
"The findings from this paper have two profound implications: the first is that the price signals are not aligned with what would be best for the grid -- and for ratepayers. The second is that it calls for considering investments in a charging infrastructure for where people work," said Ines Azevedo, one of the co-senior authors of the study.

"We need to move quickly toward decarbonizing the transportation sector, which accounts for the bulk of emissions in California," Azevedo continued. "This work provides insight on how to get there. Let's ensure that we pursue policies and investment strategies that allow us to do so in a way that is sustainable."
The Internet

Neal Stephenson's Lamina1 Drops White Paper On Building the Open Metaverse (venturebeat.com) 49

An anonymous reader quotes a report from VentureBeat: Neal Stephenson's Lamina1 blockchain technology startup dropped a white paper today on building the open metaverse. It's quite the manifesto. In the document, the company said its mission is to deliver a Layer 1 blockchain, interoperating tools and decentralized services optimized for the open metaverse -- providing communities with infrastructure, not gatekeepers to build a more immersive internet. The effort includes some new original content: Under active early-stage development, Neal Stephenson's THEEE METAVERSE promises a richly-imagined interactive virtual world with an unforgettable origin story, the paper said. Built on the Lamina1 chain, creators will come to experience Neal's vision and stay to develop their own. Stay tuned for more details, the paper said. [...]

In the paper, Stephenson said, "Inexorable economic forces drive investors to pay artists as little as possible while steering their creative output in the directions that involve the least financial risk." The aim is to correct the sins of the past. The paper said that Web2 introduced a period of rapid innovation and unprecedented access to entertainment, information and goods on a global scale. Streamlined tools and usability brought creators and innovators to the web en masse to build digital storefronts, engage and transact with their customers. Owning and controlling that growing ecosystem of content and personal data became a primary, lucrative initiative for major corporations. Consumer behavior, recorded on centralized company servers, offered constant, privileged insight into how to monetize human emotion and attention, Lamina1 said. At its best, Web3 envisions a better world through the thoughtful redesigning of our online lives, instituting stronger advocacy for our interests, our freedom and our rights, the company said. Much as Web2 flourished with the maturity of tools and services that offered creators and consumers ease of use, the open metaverse will benefit from open protocols for payments and data, and a set of interoperating decentralized services to support virtual worlds. Lamina1 will be the rallying point for an ecosystem of open source tools, open standards and enabling technologies conceived and co-developed with a vibrant community of creators. [...]

Lamina1 said it takes a multi-pronged approach to the open metaverse: a Layer 1 blockchain, Metaverse-as-a-Service (MaaS), community economic participation and incentives, and original content. Lamina1 said it uses a high-speed Proof-of-Stake (PoS) consensus algorithm, customized to support the needs of content creators -- providing provenance for creatorship and enabling attributive and behavioral characteristics of an object to be minted, customized and composed on-chain. "We chose to start with Avalanche, a robust generalized blockchain that delivers the industry's most scalable and environmentally-efficient chain for managing digital assets to date. This starting point provides Lamina1 with a flexible architecture and an extendable platform to support our goals in data storage, interoperability, integration incentives, carbon-negative operation, messaging, privacy, high-scale payments and identity," the white paper said. Lamina1 said its metaverse services work will explore creating a metaverse browser and it will align itself with the Metaverse Standards Forum.
To enlist community support, the company isn't aligning with Big Tech. "We march waving the pirate flag at the front of the cultural movement, asking both creators and consumers to join the fight for greater agency and ownership -- the fight for an economy that is imagined, produced and owned by its creators," Lamina1 said. "It's going to be hard, and it's going to take heart, but the upside of providing a maker direct access to their market is staggering."

The paper added, "At Lamina1, we believe two things will power expansion and growth in the metaverse -- a straightforward and principled approach to serving a diverse, open and self-sustaining community of makers, and a powerful ecosystem of content and experiences that will drive fans and funding directly to the platform."
Earth

The World's Largest Carbon Removal Project Yet Is Headed For Wyoming (theverge.com) 76

A couple of climate tech startups plan to suck a hell of a lot of carbon dioxide out of the air and trap it underground in Wyoming. The Verge reports: The goal of the new endeavor, called Project Bison, is to build a new facility capable of drawing down 5 million metric tons of carbon dioxide annually by 2030. The CO2 can then be stored deep within the Earth, keeping it out of the atmosphere, where it would have continued to heat up the planet. A Los Angeles-based company called CarbonCapture is building the facility, called a direct air capture (DAC) plant, that is expected to start operations as early as next year. It'll start small and work up to 5 million metric tons a year. If all goes smoothly, by 2030 the operation will be orders of magnitude larger than existing direct air capture projects.

CarbonCapture's equipment is modular, which is what the company says makes the technology easy to scale up. The plant itself will be made of modules that look like stacks of shipping containers with vents that air passes through. At first, the modules used for Project Bison will be made at CarbonCapture's headquarters in Los Angeles. In the first phase of the project, expected to be completed next year, around 25 modules will be deployed in Wyoming. Those modules will collectively have the capacity to remove about 12,000 tons of CO2 a year from the air. The plan is to deploy more modules in Wyoming over time and potentially manufacture the modules there one day, too.

Inside each of the 40-foot modules are about 16 "reactors" with "sorbent cartridges" that essentially act as filters that attract CO2. The filters capture about 75 percent of the CO2 from the air that passes over them. Within about 30 to 40 minutes, the filters have absorbed all the CO2 they can. Once the filters are fully saturated, the reactor goes offline so that the filters can be heated up to separate out the CO2. There are many reactors within one module, each running at its own pace so that they're constantly collecting CO2. Together, they generate concentrated streams of CO2 that can then be compressed and sent straight to underground wells for storage. DAC is still very expensive -- it can cost upwards of $600 to capture a ton of carbon dioxide. That figure is expected to come down with time as the technology advances. But for now, it takes a lot of energy to run DAC plants, which contributes to the big price tag. The filters need to reach around 85 degrees Celsius (185 degrees Fahrenheit) for a few minutes, and reaching those kinds of high temperatures for DAC plants can get pretty energy-intensive. Eventually, [...] Bison plans to get enough power from new wind and solar installations. When the project is running at its full capacity in 2030, it's expected to use the equivalent of about 2GW of solar energy per year. For comparison, about 3 million photovoltaic panels together generate a gigawatt of solar energy, according to the Department of Energy. But initially, the energy used by Project Bison might have to come from natural gas, according to CarbonCapture CEO Adrian Corless. So Bison would first need to capture enough CO2 to cancel out the amount of emissions it generates by burning through that gas before it can go on to reduce the amount of CO2 in the atmosphere.
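Using only the figures quoted above, the implied scale-up is easy to estimate; a back-of-envelope sketch, not CarbonCapture's own roadmap:

```python
# Scale-up arithmetic from the article's figures: ~25 modules capture
# ~12,000 tons of CO2 per year in phase one; the 2030 goal is 5 million.
PHASE_ONE_MODULES = 25
PHASE_ONE_TONS_PER_YEAR = 12_000
TARGET_TONS_PER_YEAR = 5_000_000
COST_PER_TON_USD = 600  # today's upper-bound DAC cost, expected to fall

tons_per_module = PHASE_ONE_TONS_PER_YEAR / PHASE_ONE_MODULES  # 480 t/yr each
modules_needed = TARGET_TONS_PER_YEAR / tons_per_module        # ~10,417
annual_cost = TARGET_TONS_PER_YEAR * COST_PER_TON_USD          # ~$3.0B/yr

print(f"~{tons_per_module:.0f} t/module/yr -> ~{modules_needed:,.0f} modules")
print(f"at today's cost: ~${annual_cost / 1e9:.1f}B per year captured")
```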
"The geology in Wyoming allows Project Bison to store the captured CO2 on-site near the modules," adds The Verge. "Project Bison plans to permanently store the CO2 it captures underground. Specifically, project leaders are looking at stowing it 12,000 feet underground in 'saline aquifers' -- areas of rock that are saturated with salt water."
Google

Google Partners With Framework To Launch Upgradable and Customizable Chromebook (theverge.com) 14

Framework and Google have announced the new Framework Laptop Chromebook Edition. As the name implies, this is an upgradable, customizable Chromebook from the same company that put out the Framework laptop last year. From a report: User-upgradable laptops are rare enough already, but user-upgradable Chromebooks are nigh unheard of. While the size of the audience for such a device remains to be seen, it's certainly a step in the right direction for repairability in the laptop space as a whole. Multiple parts of the Framework are user-customizable, though it's not clear whether every part that's adjustable on the Windows Framework can be adjusted on the Chromebook as well. Each part has a QR code on it which, if scanned, brings up the purchase page for the part's replacement. Most excitingly (to me), the Chromebook Edition includes the same expansion card system as the Windows edition, meaning you can choose the ports you want and where to put them. I don't know of any other laptop, Windows or Chrome OS, where you can do this, and it's easily my personal favorite part of Framework's model. You can choose between USB-C, USB-A, microSD, HDMI, DisplayPort, Ethernet, high-speed storage, "and more," per the press release. HDMI, in particular, is a convenient option to have on a Chromebook.
Data Storage

Morgan Stanley Hard Drives With Client Data Turn Up On Auction Site (nytimes.com) 70

An anonymous reader quotes a report from the New York Times: Morgan Stanley Smith Barney has agreed to pay a $35 million fine to settle claims that it failed to protect the personal information of about 15 million customers, the Securities and Exchange Commission said on Tuesday. In a statement announcing the settlement, the S.E.C. described what it called Morgan Stanley's "extensive failures," over a five-year period beginning in 2015, to safeguard customer information, in part by not properly disposing of hard drives and servers that ended up for sale on an internet auction site.

On several occasions, the commission said, Morgan Stanley hired a moving and storage company with no experience or expertise in data destruction services to decommission thousands of hard drives and servers containing the personal information of millions of its customers. The moving company then sold thousands of the devices to a third party, and the devices were then resold on an unnamed internet auction site, the commission said. An information technology consultant in Oklahoma who bought some of the hard drives on the internet chastised Morgan Stanley after he found that he could still access the firm's data on those devices.

Morgan Stanley is "a major financial institution and should be following some very stringent guidelines on how to deal with retiring hardware," the consultant wrote in an email to Morgan Stanley in October 2017, according to the S.E.C. The firm should, at a minimum, get "some kind of verification of data destruction from the vendors you sell equipment to," the consultant wrote, according to the S.E.C. Morgan Stanley eventually bought the hard drives back from the consultant. Morgan Stanley also recovered some of the other devices that it had improperly discarded, but has not recovered the "vast majority" of them, the commission said.
The settlement also notes that Morgan Stanley "had not properly disposed of consumer report information when it decommissioned servers from local offices and branches as part of a 'hardware refresh program' in 2019," reports the Times. "Morgan Stanley later learned that the devices had been equipped with encryption capability, but that it had failed to activate the encryption software for years, the commission said."
EU

Germany's Blanket Data Retention Law Is Illegal, EU Top Court Says (reuters.com) 20

An anonymous reader quotes a report from Reuters: Germany's general data retention law violates EU law, Europe's top court ruled on Tuesday, dealing a blow to member states banking on blanket data collection to fight crime and safeguard national security. The law may only be applied in circumstances where there is a serious threat to national security defined under very strict terms, the Court of Justice of the European Union (CJEU) said. The ruling comes after major attacks by Islamist militants in France, Belgium and Britain in recent years. Governments argue that access to data, especially that collected by telecoms operators, can help prevent such incidents, while operators and civil rights activists oppose such access.

The latest case was triggered after Deutsche Telekom unit Telekom Deutschland and internet service provider SpaceNet AG challenged Germany's data retention law arguing it breached EU rules. The German court subsequently sought the advice of the CJEU which said such data retention can only be allowed under very strict conditions. "The Court of Justice confirms that EU law precludes the general and indiscriminate retention of traffic and location data, except in the case of a serious threat to national security," the judges said. "However, in order to combat serious crime, the member states may, in strict compliance with the principle of proportionality, provide for, inter alia, the targeted or expedited retention of such data and the general and indiscriminate retention of IP addresses," they said.

Data Storage

Last Floppy-Disk Seller Says Airlines Still Order the Old Tech (businessinsider.com) 61

Tom Persky, the founder of floppydisk.com who claims to be the "last man standing in the floppy disk business," said that the airline industry is one of his biggest customers. He talked about this in the new book "Floppy Disk Fever: The Curious Afterlives of a Flexible Medium" by Niek Hilkmann and Thomas Walskaar. Insider reports: "My biggest customers -- and the place where most of the money comes from -- are the industrial users," Persky said, in an interview from the book published online in Eye On Design last week. "These are people who use floppy disks as a way to get information in and out of a machine. Imagine it's 1990, and you're building a big industrial machine of one kind or another. You design it to last 50 years and you'd want to use the best technology available."

Persky added: "Take the airline industry for example. Probably half of the air fleet in the world today is more than 20 years old and still uses floppy disks in some of the avionics. That's a huge consumer." He also said that the medical sector still uses floppy disks. And then there's "hobbyists," who want to "buy ten, 20, or maybe 50 floppy disks."

Data Storage

Meet the Man Who Still Sells Floppy Disks (aiga.org) 113

Eye on Design is the official blog of the US-based professional graphic design organization AIGA. They've just published a fascinating interview with Tom Persky, who calls himself "the last man standing in the floppy disk business." He is the time-honored founder of floppydisk.com, a US-based company dedicated to the selling and recycling of floppy disks. Other services include disk transfers, a recycling program, and selling used and/or broken floppy disks to artists around the world. All of this makes floppydisk.com a key player in the small yet profitable contemporary floppy scene....

Persky: I was actually in the floppy disk duplication business. Not in a million years did I think I would ever sell blank floppy disks. Duplicating disks in the 1980s and early 1990s was as good as printing money. It was unbelievably profitable. I only started selling blank copies organically over time. You could still go down to any office supply store, or any computer store to buy them. Why would you try to find me, when you could just buy disks off the shelf? But then these larger companies stopped carrying them or went out of business and people came to us. So here I am, a small company with a floppy disk inventory, and I find myself to be a worldwide supplier of this product. My business, which used to be 90% CD and DVD duplication, is now 90% selling blank floppy disks. It's shocking to me....

Q: Where does this focus on floppy disks come from? Why not work with another medium...?

Persky: When people ask me: "Why are you into floppy disks today?" the answer is: "Because I forgot to get out of the business." Everybody else in the world looked at the future and came to the conclusion that this was a dying industry. Because I'd already bought all my equipment and inventory, I thought I'd just keep this revenue stream. I stuck with it and didn't try to expand. Over time, the total number of floppy users has gone down. However, the number of people who provided the product went down even faster. If you look at those two curves, you see that there is a growing market share for the last man standing in the business, and that man is me....

I made the decision to buy a large quantity, a couple of million disks, and we've basically been living off of that inventory ever since. From time to time, we get very lucky. About two years ago a guy called me up and said: "My grandfather has all this floppy junk in the garage and I want it out. Will you take it?" Of course I wanted to take it off his hands. So, we went back and forth and negotiated a fair price. Without going into specifics, he ended up with two things that he wanted: an empty garage and a sum of money. I ended up with around 50,000 floppy disks and that's a good deal.

In the interview Persky reveals he has around half a million floppy disks in stock — 3.5-inch, 5.25-inch, 8-inch, "and some rather rare diskettes. Another thing that happened organically was the start of our floppy disk recycling service. We give people the opportunity to send us floppy disks and we recycle them, rather than put them into a landfill. The sheer volume of floppy disks we get in has really surprised me, it's sometimes 1,000 disks a day."

But he also estimates its use is more widespread than we realize. "Probably half of the air fleet in the world today is more than 20 years old and still uses floppy disks in some of the avionics. That's a huge consumer. There's also medical equipment, which requires floppy disks to get the information in and out of medical devices.... "

And in the end he seems to have a genuine affection for floppy disk technology. "There's this joke in which a three-year-old little girl comes to her father holding a floppy disk in her hand. She says: 'Daddy, Daddy, somebody 3D-printed the save icon.' The floppy disk will be an icon forever."

The interview is excerpted from a new book called Floppy Disk Fever: The Curious Afterlives of a Flexible Medium.

Hat tip for finding the story to the newly-redesigned front page of The Verge.
Security

Uber Investigating Breach of Its Computer Systems (nytimes.com) 27

Uber discovered its computer network had been breached on Thursday, leading the company to take several of its internal communications and engineering systems offline as it investigated the extent of the hack. From a report: The breach appeared to have compromised many of Uber's internal systems, and a person claiming responsibility for the hack sent images of email, cloud storage and code repositories to cybersecurity researchers and The New York Times. "They pretty much have full access to Uber," said Sam Curry, a security engineer at Yuga Labs who corresponded with the person who claimed to be responsible for the breach. "This is a total compromise, from what it looks like."

An Uber spokesman said the company was investigating the breach and contacting law enforcement officials. Uber employees were instructed not to use the company's internal messaging service, Slack, and found that other internal systems were inaccessible, said two employees, who were not authorized to speak publicly. Shortly before the Slack system was taken offline on Thursday afternoon, Uber employees received a message that read, "I announce I am a hacker and Uber has suffered a data breach." The message went on to list several internal databases that the hacker claimed had been compromised.
BleepingComputer adds: According to Curry, the hacker also had access to the company's HackerOne bug bounty program, where they commented on all of the company's bug bounty tickets. Curry told BleepingComputer that he first learned of the breach after the attacker left the above comment on a vulnerability report he submitted to Uber two years ago. Uber runs a HackerOne bug bounty program that allows security researchers to privately disclose vulnerabilities in their systems and apps in exchange for a monetary bug bounty reward. These vulnerability reports are meant to be kept confidential until a fix can be released to prevent attackers from exploiting them in attacks.

Curry further shared that an Uber employee said the threat actor had access to all of the company's private vulnerability submissions on HackerOne. BleepingComputer was also told by a source that the attacker downloaded all vulnerability reports before they lost access to Uber's bug bounty program. This likely includes vulnerability reports that have not been fixed, presenting a severe security risk to Uber. HackerOne has since disabled the Uber bug bounty program, cutting off access to the disclosed vulnerabilities.

Data Storage

Five Years of Data Show That SSDs Are More Reliable Than HDDs Over the Long Haul (arstechnica.com) 82

Backup and cloud storage company Backblaze has published data comparing the long-term reliability of solid-state storage drives and traditional spinning hard drives in its data center. Based on data collected since the company began using SSDs as boot drives in late 2018, Backblaze cloud storage evangelist Andy Klein published a report yesterday showing that the company's SSDs are failing at a much lower rate than its HDDs as the drives age. Ars Technica: Backblaze has published drive failure statistics (and related commentary) for years now; the hard drive-focused reports observe the behavior of tens of thousands of data storage and boot drives across most major manufacturers. The reports are comprehensive enough that we can draw at least some conclusions about which companies make the most (and least) reliable drives. The sample size for this SSD data is much smaller, both in the number and variety of drives tested -- they're mostly 2.5-inch drives from Crucial, Seagate, and Dell, with little representation of Western Digital/SanDisk and no data from Samsung drives at all. This makes the data less useful for comparing relative reliability between companies, but it can still be useful for comparing the overall reliability of hard drives to the reliability of SSDs doing the same work.

Backblaze uses SSDs as boot drives for its servers rather than data storage, and its data compares these drives to HDDs that were also being used as boot drives. The company says these drives handle the storage of logs, temporary files, SMART stats, and other data in addition to booting -- they're not writing terabytes of data every day, but they're not just sitting there doing nothing once the server has booted, either. Over their first four years of service, SSDs fail at a lower rate than HDDs overall, but the curve looks basically the same -- few failures in year one, a jump in year two, a small decline in year three, and another increase in year four. But once you hit year five, HDD failure rates begin going upward quickly -- jumping from a 1.83 percent failure rate in year four to 3.55 percent in year five. Backblaze's SSDs, on the other hand, continued to fail at roughly the same 1 percent rate as they did the year before.
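Backblaze reports these percentages as an annualized failure rate (AFR) computed over drive-days of service. A minimal sketch of that calculation; the fleet size in the example is made up for illustration:

```python
# Annualized failure rate as Backblaze defines it in its drive reports:
# AFR = failures / (drive_days / 365) * 100.
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """Percent of drives expected to fail per year of service."""
    return failures / (drive_days / 365) * 100

# A hypothetical 1,000-drive fleet running all year with 35 failures
# lands at 3.5% -- roughly the year-five HDD figure cited above.
print(f"{annualized_failure_rate(35, 1000 * 365):.2f}% AFR")
```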

Businesses

Amazon Releases Upgraded Kindle and Kindle Kids Devices For First Time in Three Years (geekwire.com) 50

Amazon unveiled enhanced versions of its Kindle and Kindle Kids e-readers on Tuesday, the first time the tech giant has upgraded its flagship e-reader in nearly three years. From a report: The upgraded Kindle will now include a battery life of up to six weeks, USB-C charging and 16GB of storage. The Kindle Kids version will also come with a one-year subscription to Amazon Kids+. The Kindle will cost $99.99, up from the previous price of $89.99. The Kindle Kids model will cost $119.99, up from $109.99.
Twitter

Extreme California Heat Knocks Key Twitter Data Center Offline (cnn.com) 62

Extreme heat in California has left Twitter without one of its key data centers, and a company executive warned in an internal memo obtained by CNN that another outage elsewhere could result in the service going dark for some of its users. CNN reports: "On September 5th, Twitter experienced the loss of its Sacramento (SMF) datacenter region due to extreme weather. The unprecedented event resulted in the total shutdown of physical equipment in SMF," Carrie Fernandez, the company's vice president of engineering, said in an internal message to Twitter engineers on Friday. Major tech companies usually have multiple data centers, in part to ensure their service can stay online if one center fails; this is known as redundancy.

As a result of the outage in Sacramento, Twitter is in a "non-redundant state," according to Fernandez's Friday memo. She explained that Twitter's data centers in Atlanta and Portland are still operational but warned, "If we lose one of those remaining datacenters, we may not be able to serve traffic to all Twitter's users." The memo goes on to prohibit non-critical updates to Twitter's product until the company can fully restore its Sacramento data center services. "All production changes, including deployments and releases to mobile platforms, are blocked with the exception of those changes required to address service continuity or other urgent operational needs," Fernandez wrote.
In a statement about the Sacramento outage, a Twitter spokesperson told CNN, "There have been no disruptions impacting the ability for people to access and use Twitter at this time. Our teams remain equipped with the tools and resources they need to ship updates and will continue working to provide a seamless Twitter experience."
Security

Retbleed Fix Slugs Linux VM Performance By Up To 70 Percent (theregister.com) 33

VMware engineers have tested the Linux kernel's fix for the Retbleed speculative execution bug, and report it can impact compute performance by a whopping 70 percent. The Register reports: In a post to the Linux Kernel Mailing List titled "Performance Regression in Linux Kernel 5.19", VMware performance engineering staffer Manikandan Jagatheesan reports the virtualization giant's internal testing found that running Linux VMs on the ESXi hypervisor using version 5.19 of the Linux kernel saw compute performance dip by up to 70 percent when using a single vCPU, networking performance fall by 30 percent and storage performance dip by up to 13 percent. Jagatheesan said VMware's testers turned off the Retbleed remediation in version 5.19 of the kernel and ESXi performance returned to levels experienced under version 5.18.

Because speculative execution exists to speed processing, it is no surprise that disabling it impacts performance. A 70 percent decrease in computing performance will, however, have a major impact on application performance that could lead to unacceptable delays for some business processes. VMware's tests were run on Intel Skylake CPUs -- silicon released between 2015 and 2017 that will still be present in many server fleets. Subsequent CPUs addressed the underlying issues that allowed Retbleed and other Spectre-like attacks.
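For admins weighing that tradeoff, kernels carrying the fix report mitigation status through the standard sysfs vulnerabilities interface, and the mitigation can be disabled at boot with the retbleed=off kernel parameter. A minimal sketch for checking a host's status:

```python
# Read the kernel's reported Retbleed status from sysfs; the file only
# exists on kernels that carry the Retbleed patches (5.19 and backports).
from pathlib import Path

status_file = Path("/sys/devices/system/cpu/vulnerabilities/retbleed")
if status_file.exists():
    print(f"retbleed: {status_file.read_text().strip()}")
else:
    print("retbleed: not reported (kernel predates the mitigation)")
```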

AI

Runway Teases AI-Powered Text-To-Video Editing Using Written Prompts (arstechnica.com) 10

An anonymous reader quotes a report from Ars Technica: In a tweet posted this morning, artificial intelligence company Runway teased a new feature of its AI-powered web-based video editor that can edit video from written descriptions, often called "prompts." Runway's "Text to Video" demonstration reel shows a text input box that allows editing commands such as "import city street" (suggesting the video clip already existed) or "make it look more cinematic" (applying an effect). It depicts someone typing "remove object" and selecting a streetlight with a drawing tool that then disappears (from our testing, Runway can already perform a similar effect using its "inpainting" tool, with mixed results). The promotional video also showcases what looks like still-image text-to-image generation similar to Stable Diffusion (note that the video does not depict any of these generated scenes in motion) and demonstrates text overlay, character masking (using its "Green Screen" feature, also already present in Runway), and more.

Video generation promises aside, what seems most novel about Runway's Text to Video announcement is the text-based command interface. Whether video editors will want to work with natural language prompts in the future remains to be seen, but the demonstration shows that people in the video production industry are actively working toward a future in which synthesizing or editing video is as easy as writing a command. [...] Runway is available as a web-based commercial product that runs in the Google Chrome browser for a monthly fee, which includes cloud storage for about $35 per year. But the Text to Video feature is in closed "Early Access" testing, and you can sign up for the waitlist on Runway's website.

Android

Android 13 Raises Minimum System Requirements To 2GB of RAM, 16GB of Storage 50

Android 13 has recently hit the streets, and with it, Google is raising the minimum requirements for Android phones. From a report: Google's latest blog post announced that the minimum amount of RAM for Android Go, the low-end version of Android, is now 2GB for Android 13, whereas previously, it was 1GB. Esper's Mishaal Rahman and Google Product Expert Jason Bayton also claim the minimum storage requirements have been bumped up to 16GB, though Google doesn't seem to have publicly documented this anywhere. The increase in system requirements means any phone that doesn't meet the minimum specs won't be able to update to Android 13. New phones launching with Android 13 will need to meet the minimum requirements to be eligible for Play Store licensing, though launching with an older version of Android (with lower requirements) will still be an option for a while. Technically, anyone can grab the Android source code and build anything with it, but if you want to license the Google apps and have access to the Google-trademarked brand "Android," you'll need to comply with Google's rules.
Intel

Aaeon Packs 12-Core Intel i7 Into a Raspberry Pi-Sized Board (theregister.com) 30

An anonymous reader quotes a report from The Register: Aaeon's GENE-ADP6, announced this week, packs up to a 12-core/16-thread Intel processor with Iris Xe graphics into a 3.5-inch form factor. The diminutive system is aimed at machine-vision applications and can be configured with your choice of Intel silicon: Celeron, Core i3, i5, or 10- or 12-core i7 processors. As with other SBCs we've seen from Aaeon and others, the processors aren't socketed, so you won't be upgrading later. This device is pretty much aimed at embedded and industrial use, mind. All five SKUs are powered by Intel's current-gen Alder Lake mobile processor family, including a somewhat unusual 5-core Celeron processor that pairs a single performance core with four efficiency cores. However, only the i5 and i7 SKUs come equipped with Intel's Iris Xe integrated graphics; the i3 and Celeron are stuck with UHD graphics. The board can be equipped with up to 64GB of DDR5 memory operating at up to 4800 megatransfers/sec by way of a pair of SODIMM modules.

For I/O the board features a nice set of connectivity including a pair of NICs operating at 2.5 Gbit/sec and 1 Gbit/sec, HDMI 2.1 and DisplayPort 1.4, three 10Gbit/sec-capable USB 3.2 Gen 2 ports, and a single USB-C port that supports up to 15W of power delivery and display out. For those looking for additional connectivity for their embedded applications, the system also features a plethora of pin headers for USB 2.0, display out, serial interfaces, and 8-bit GPIO. Storage is provided by your choice of a SATA 3.0 interface or an M.2 mSATA/NVMe SSD. Unlike Aaeon's Epic-TGH7 announced last month, the GENE-ADP6 is too small to accommodate a standard PCIe slot, but does feature an FPC connector, which the company says supports additional NVMe storage or external graphics by way of a PCIe 4.0 x4 interface.

Data Storage

US State of Virginia Has More Datacenter Capacity Than Europe or China (theregister.com) 42

The state of Virginia has over a third of America's hyperscale datacenter capacity, and this amounts to more than the entire capacity of China or the whole of Europe, highlighting just how much infrastructure is concentrated along the so-called Datacenter Alley. The Register reports: These figures come from Synergy Research Group, which said that the US accounts for 53 percent of global hyperscale datacenter capacity, as measured by critical IT load, at the end of the second quarter of 2022. The remainder is relatively evenly split between China, Europe, and the rest of the world. While few would be surprised at the US accounting for the lion's share of datacenter capacity, the fact that so much is concentrated in one state could raise a few eyebrows, especially when it is centered on a small number of counties in Northern Virginia -- typically Loudoun, Prince William, and Fairfax -- which make up Datacenter Alley.

"Hyperscale operators take a lot of factors into account when deciding where to locate their datacenter infrastructure," said Synergy chief analyst John Dinsdale. "This includes availability of suitable real estate, cost and availability of power supply options, proximity to customers, the risk of natural disasters, local incentives and approvals processes, the ease of doing business and internal business dynamics, and this has inevitably led to some hyperscale hot spots." Amazon in particular locates a large amount of its datacenter infrastructure in Northern Virginia, with Microsoft, Facebook, Google, ByteDance, and others also having a major presence, according to Synergy. The big three cloud providers -- Amazon, Microsoft and Google -- have the broadest hyperscale bit barn footprint, with each of these having over 130 datacenters of the 800 or so around the globe. When measured in datacenter capacity, the leading companies are Amazon, Google, Microsoft, Facebook, Alibaba and Tencent, according to Synergy.

Facebook

Facebook Engineers: We Have No Idea Where We Keep All Your Personal Data (theintercept.com) 69

An anonymous reader quotes a report from The Intercept: In March, two veteran Facebook engineers found themselves grilled about the company's sprawling data collection operations in a hearing for the ongoing lawsuit over the mishandling of private user information stemming from the Cambridge Analytica scandal. The hearing, a transcript of which was recently unsealed (PDF), was aimed at resolving one crucial issue: What information, precisely, does Facebook store about us, and where is it? The engineers' response will come as little relief to those concerned with the company's stewardship of billions of digitized lives: They don't know.

The admissions occurred during a hearing with special master Daniel Garrie, a court-appointed subject-matter expert tasked with resolving a disclosure impasse. Garrie was attempting to get the company to provide an exhaustive, definitive accounting of where personal data might be stored in some 55 Facebook subsystems. Both veteran Facebook engineers, who according to LinkedIn have two decades of experience between them, struggled to even venture what may be stored in Facebook's subsystems. "I'm just trying to understand at the most basic level from this list what we're looking at," Garrie asked. "I don't believe there's a single person that exists who could answer that question," replied Eugene Zarashaw, a Facebook engineering director. "It would take a significant team effort to even be able to answer that question." When asked about how Facebook might track down every bit of data associated with a given user account, Zarashaw was stumped again: "It would take multiple teams on the ad side to track down exactly the -- where the data flows. I would be surprised if there's even a single person that can answer that narrow question conclusively." [...]

Facebook's stonewalling has been revealing on its own, providing variations on the same theme: It has amassed so much data on so many billions of people and organized it so confusingly that full transparency is impossible on a technical level. In the March 2022 hearing, Zarashaw and Steven Elia, a software engineering manager, described Facebook as a data-processing apparatus so complex that it defies understanding from within. The hearing amounted to two high-ranking engineers at one of the most powerful and resource-flush engineering outfits in history describing their product as an unknowable machine. The special master at times seemed in disbelief, as when he questioned the engineers over whether any documentation existed for a particular Facebook subsystem. "Someone must have a diagram that says this is where this data is stored," he said, according to the transcript. Zarashaw responded: "We have a somewhat strange engineering culture compared to most where we don't generate a lot of artifacts during the engineering process. Effectively the code is its own design document often." He quickly added, "For what it's worth, this is terrifying to me when I first joined as well."
