Stats

What Were Slashdot's Top 10 Stories of 2023? 22

Slashdot's 10 most-visited stories of 2023 seemed to touch on all the themes of the year: one story about AI, two about electric cars, two about Linux, and two about the Rust programming language.

And at the top of this list, the #1 story of the year drew over 100,000 views...

Interestingly, a story that ran on New Year's Eve of 2022 attracted so much traffic, it would've been the second-most visited story for all of 2023 — if it had run just a few hours later. That story?

Systemd's Growth Over 2022.

Social Networks

The Rise and Fall of Usenet (zdnet.com) 130

An anonymous reader quotes a report from ZDNet: Long before Facebook existed, or even before the Internet, there was Usenet. Usenet was the first social network. Now, with Google Groups abandoning Usenet, this oldest of all social networks is doomed to disappear. Some might say it's well past time. As Google declared, "Over the last several years, legitimate activity in text-based Usenet groups has declined significantly because users have moved to more modern technologies and formats such as social media and web-based forums. Much of the content being disseminated via Usenet today is binary (non-text) file sharing, which Google Groups does not support, as well as spam." True, these days, Usenet's content is almost entirely spam, but in its day, Usenet was everything that Twitter and Reddit would become and more.

In 1979, Duke University computer science graduate students Tom Truscott and Jim Ellis conceived of a network of shared messages under various topics. These messages, also known as articles or posts, were submitted to topic categories, which became known as newsgroups. Within those groups, messages were bound together in threads and sub-threads. [...] In 1980, Truscott and Ellis, using the Unix-to-Unix Copy Protocol (UUCP), hooked up with the University of North Carolina to form the first Usenet nodes. From there, it would rapidly spread over the pre-Internet ARPANet and other early networks. These messages would be stored and retrieved from news servers. These would "peer" to each other so that messages to a newsgroup would be shared from server to server and user to user, so that within hours, your messages would reach the entire networked world. Usenet would evolve its own network protocol, Network News Transfer Protocol (NNTP), to speed the transfer of these messages. Today, the social network Mastodon uses a similar approach with the ActivityPub protocol, while other social networks, such as Threads, are exploring using ActivityPub to connect with Mastodon and the other social networks that support ActivityPub. As the saying goes, everything old is new again.
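
NNTP itself is a simple, line-oriented text protocol, which is part of why news servers could peer with each other so easily. As a rough illustration only — not code from any actual news server or client — here is a minimal Python sketch of an NNTP exchange over a raw socket; "news.example.com" is a placeholder hostname:

import socket

# NNTP is a line-oriented text protocol on port 119, much like SMTP.
# "news.example.com" is a placeholder; substitute a real news server.
with socket.create_connection(("news.example.com", 119)) as sock:
    f = sock.makefile("rwb")
    print(f.readline().decode().strip())    # server greeting, e.g. "200 ..."
    f.write(b"GROUP comp.lang.python\r\n")  # select a newsgroup
    f.flush()
    print(f.readline().decode().strip())    # "211 <count> <first> <last> <group>"
    f.write(b"QUIT\r\n")
    f.flush()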

[...] Usenet was never an organized social network. Each server owner could -- and did -- set its own rules. Mind you, there was some organization to begin with. The first 'mainstream' Usenet groups, the comp, misc, news, rec, soc, and sci hierarchies, were widely accepted and disseminated until 1987. Then, faced with a flood of new groups, a new naming plan emerged in what was called the Great Renaming. This led to a lot of disputes and the creation of the talk hierarchy. This and the first six became known as the Big Seven. Then the alt groups emerged as a free speech protest. Afterward, fewer Usenet sites made it possible to access all the newsgroups. Instead, maintainers and users would have to decide which ones they'd support. Over the years, Usenet began to decline as group discussions were displaced by spam and overwhelmed by flame wars.
"If, going forward, you want to keep an eye on Usenet -- things could change, miracles can happen -- you'll need to get an account from a Usenet provider," writes ZDNet's Steven Vaughan-Nichols. "I favor Eternal September, which offers free access to the discussion Usenet groups; NewsHosting, $9.99 a month with access to all the Usenet groups; EasyNews, $9.98 a month with fast downloads, and a good search engine; and Eweka, 9.50 Euros a month and EU only servers."

"You'll also need a Usenet client. One popular free one is Mozilla's Thunderbird E-Mail client, which doubles as a Usenet client. EasyNews also offers a client as part of its service. If you're all about downloading files, check out SABnzbd."
Microsoft

When Linux Spooked Microsoft: Remembering 1998's Leaked 'Halloween Documents' (catb.org) 59

It happened a quarter of a century ago. The New York Times wrote that "An internal memorandum reflecting the views of some of Microsoft's top executives and software development managers reveals deep concern about the threat of free software and proposes a number of strategies for competing against free programs that have recently been gaining in popularity." The memo warns that the quality of free software can meet or exceed that of commercial programs and describes it as a potentially serious threat to Microsoft. The document was sent anonymously last week to Eric Raymond, a key figure in a loosely knit group of software developers who collaboratively create and distribute free programs ranging from operating systems to Web browsers. Microsoft executives acknowledged that the document was authentic...

In addition to acknowledging that free programs can compete with commercial software in terms of quality, the memorandum calls the free software movement a "long-term credible" threat and warns that employing a traditional Microsoft marketing strategy known as "FUD," an acronym for "fear, uncertainty and doubt," will not succeed against the developers of free software. The memorandum also voices concern that Linux is rapidly becoming the dominant version of Unix for computers powered by Intel microprocessors.

The competitive issues, the note warns, go beyond the fact that the software is free. It is also part of the open-source software, or O.S.S., movement, which encourages widespread, rapid development efforts by making the source code — that is, the original lines of code written by programmers — readily available to anyone. This enables programmers the world over to continually write or suggest improvements or to warn of bugs that need to be fixed. The memorandum notes that open software presents a threat because of its ability to mobilize thousands of programmers. "The ability of the O.S.S. process to collect and harness the collective I.Q. of thousands of individuals across the Internet is simply amazing," the memo states. "More importantly, O.S.S. evangelization scales with the size of the Internet much faster than our own evangelization efforts appear to scale."

Back in 1998, Slashdot's CmdrTaco covered the whole brouhaha — including this CNN article: A second internal Microsoft memo on the threat Linux poses to Windows NT calls the operating system "a best-of-breed Unix" and wonders aloud if the open-source operating system's momentum could be slowed in the courts.

As with the first "Halloween Document," the memo — written by product manager Vinod Valloppillil and another Microsoft employee, Josh Cohen — was obtained by Linux developer Eric Raymond and posted on the Internet. In it, Cohen and Valloppillil, who also authored the first "Halloween Document," appear to suggest that Microsoft could slow the open-source development of Linux with legal battles. "The effect of patents and copyright in combating Linux remains to be investigated," the duo wrote.

Microsoft's slogan in 1998 was "Where do you want to go today?" So Eric Raymond published the documents on his web site under the headline "Where will Microsoft try to drag you today? Do you really want to go there?"

Twenty-five years later, it's all still up there, preserved for posterity on Raymond's web page — a collection of leaked Microsoft documents and related materials known collectively as "the Halloween documents." And Raymond made a point of thanking the writers of the documents, "for authoring such remarkable and effective testimonials to the excellence of Linux and open-source software in general."

Thanks to long-time Slashdot reader mtaht for remembering the documents' 25th anniversary...
Python

Experimental Project Attempts a Python Virtual Shell for Linux (cjshayward.com) 62

Long-time Slashdot reader CJSHayward shares "an attempt at Python virtual shell."

The home-brewed project "mixes your native shell with Python with the goal of letting you use your regular shell but also use Python as effectively a shell scripting language, as an alternative to your shell's built-in scripting language... I invite you to explore and improve it!"

From the web site: The Python Virtual Shell (pvsh or 'p' on the command line) lets you mix zsh / bash / etc. built-in shell scripting with slightly modified Python scripting. It's kind of like Brython [a Python implementation for client-side web programming], but for the Linux / Unix / Mac command line...

The core concept is that all Python code is indented with tabs, with an extra tab at the beginning to mark Python code, and all shell commands (including some shell builtins) have zero tabs of indentation. They can be mixed line-by-line, offering an opportunity to use built-in zsh, bash, etc. scripting or Python scripting as desired.

The Python support is an incomplete implementation; for example, it doesn't handle breaking a statement across multiple lines. Nonetheless, this offers a tool to fuse shell- and Python-based interactions from the Linux / Unix / Mac command line.
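
To make the convention concrete, here is a hypothetical pvsh script (not taken from the project's documentation): the flush-left lines are ordinary shell, and the Python lines carry one extra leading tab marking them as Python, with further tabs for Python's own block structure:

echo "shell lists the directory; Python filters it"
	import os
	for name in sorted(os.listdir('.')):
		if name.endswith('.txt'):
			print(name)
echo "done"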

Open Source

OpenBSD 7.4 Released (phoronix.com) 8

Long-time Slashdot reader Noryungi writes: OpenBSD 7.4 has been officially released. The 55th release of this BSD operating system, known for being security oriented, brings a lot of new things, including a dynamic tracer, pfsync improvements, loads of security goodies, and virtualization improvements. Grab your copy today! As mentioned by Phoronix's Michael Larabel, some of the key highlights include:

- Dynamic Tracer (DT) and Utrace support on AMD64 and i386 OpenBSD
- Power savings for those running OpenBSD 7.4 on Apple Silicon M1/M2 CPUs by allowing deep idle states when available for the idle loop and suspend
- Support for the PCIe controller found on Apple M2 Pro/Max SoCs
- Allow updating AMD CPU microcode when a newer patch is available
- A workaround for the AMD Zenbleed CPU bug
- Various SMP improvements
- Updating the Direct Rendering Manager (DRM) graphics driver support against the upstream Linux 6.1.55 state
- New drivers for supporting various Qualcomm SoC features
- Improved support for soft RAID disks in the OpenBSD installer
- Enabling of Indirect Branch Tracking (IBT) on x86_64 and Branch Target Identifier (BTI) on ARM64 for capable processors

You can download and view all the new changes via OpenBSD.org.
GNU is Not Unix

GNU Celebrates Its 40th Anniversary (fsf.org) 49

Wednesday the Free Software Foundation celebrated "the 40th anniversary of the GNU operating system and the launch of the free software movement," with an announcement calling it "a turning point in the history of computing."

"Forty years later, GNU and free software are even more relevant. While software has become deeply ingrained into everyday life, the vast majority of users do not have full control over it... " On September 27, 1983, a computer scientist named Richard Stallman announced the plan to develop a free software Unix-like operating system called GNU, for "GNU's not Unix." GNU is the only operating system developed specifically for the sake of users' freedom, and has remained true to its founding ideals for forty years. Since 1983, the GNU Project has provided a full, ethical replacement for proprietary operating systems. This is thanks to the forty years of tireless work from volunteer GNU developers around the world.

When describing GNU's history and the background behind its initial announcement, Stallman (often known simply as "RMS") stated, "with a free operating system, we could again have a community of cooperating hackers — and invite anyone to join. And anyone would be able to use a computer without starting out by conspiring to deprive his or her friends."

"When we look back at the history of the free software movement — or the idea that users should be in control of their own computing — it starts with GNU," said Zoë Kooyman, executive director of the FSF, which sponsors GNU's development. "The GNU System isn't just the most widely used operating system that is based on free software. GNU is also at the core of a philosophy that has guided the free software movement for forty years."

Usually combined with the kernel Linux, GNU forms the backbone of the Internet and powers millions of servers, desktops, and embedded computing devices. Aside from its technical advancements, GNU pioneered the concept of "copyleft," the approach to software licensing that requires the same rights to be preserved in derivative works, and is best exemplified by the GNU General Public License (GPL). As Stallman stated, "The goal of GNU was to give users freedom, not just to be popular. So we needed to use distribution terms that would prevent GNU software from being turned into proprietary software. The method we use is called 'copyleft.'"

The free software community has held strong for forty years and continues to grow, as exemplified by the FSF's annual LibrePlanet conference on software freedom and digital ethics.

Kooyman continues, "We hope that the fortieth anniversary will inspire hackers, both old and new, to join GNU in its goal to create, improve, and share free software around the world. Software is controlling our world these days, and GNU is a critique and solution to the status quo that we desperately need in order to not have our technology control us."

"In honor of GNU's fortieth anniversary, its organizational sponsor the FSF is organizing a hackday for families, students, and anyone interested in celebrating GNU's anniversary. It will be held at the FSF's offices in Boston, MA on October 1."
Open Source

The Future of Open Source is Still Very Much in Flux (technologyreview.com) 49

Free and open software have transformed the tech industry. But we still have a lot to work out to make them healthy, equitable enterprises. From a report: When Xerox donated a new laser printer to MIT in 1980, the company couldn't have known that the machine would ignite a revolution. While the early decades of software development generally ran on a culture of open access, this new printer ran on inaccessible proprietary software, much to the horror of Richard M. Stallman, then a 27-year-old programmer at the university.

A few years later, Stallman released GNU, an operating system designed to be a free alternative to one of the dominant operating systems at the time: Unix. The free-software movement was born, with a simple premise: for the good of the world, all code should be open, without restriction or commercial intervention. Forty years later, tech companies are making billions on proprietary software, and much of the technology around us is inscrutable. But while Stallman's movement may look like a failed experiment, the free and open-source software movement is not only alive and well; it has become a keystone of the tech industry.

Python

Codon Compiler For Python Is Fast - but With Some Caveats (usenix.org) 36

For 16 years, Rik Farrow has been an editor for the long-running nonprofit Usenix. He's also been a consultant for 43 years (according to his biography at Usenix.org) — and even wrote the 1988 book Unix System Security: How to Protect Your Data and Prevent Intruders.

Today Farrow stopped by Slashdot to share his thoughts on Codon. rikfarrow writes: Researchers at MIT decided to build a compiler focused on speeding up genomics processing... Recently, they have posted their code on GitHub, and I gave it a test drive.
"Managed" languages produce code for a specific runtime (like JavaScript). Now Farrow's article at Usenix.org argues that Codon produces code "much faster than other managed languages, and in some cases faster than C/C++."

Codon-compiled code is faster because "it's compiled, variables are typed at compile time, and it supports parallel execution." But there are some important caveats: The "version of Python" part is actually an important point: the builders of Codon have built a compiler that accepts a large portion of Python, including all of the most commonly used parts — but not all... Duck typing means that the Codon compiler uses hints found in the source or attempts to deduce them to determine the correct type, and assigns that as a static type. If you wanted to process data where the type is unknown before execution, this may not work for you, although Codon does support a union type that is a possible workaround. In most cases of processing large data sets, the types are known in advance so this is not an issue...

Codon is not the same as Python, in that the developers have not yet implemented all the features you would find in Python 3.10, and this, along with duck typing, will likely cause problems if you just try to compile existing scripts. I quickly ran into problems, as I uncovered unsupported bits of Python, and, judging by the Issues section of their GitHub pages, so have other people.

Codon supports a JIT feature, so that instead of attempting to compile complete scripts, you can just add a @codon.jit decorator to functions that you think would benefit from being compiled or executed in parallel, becoming much faster to execute...
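
Based on the decorator the article describes, usage presumably looks something like the following sketch (untested here; it assumes the codon package is installed alongside regular CPython):

import codon  # assumes Codon's Python package is installed

@codon.jit
def fib(n: int) -> int:
    # Explicit type hints supply the static types Codon assigns at compile time
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(40))  # compiled to native code on first call; the rest stays plain Python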

Whether your projects will benefit from experimenting with Codon will mean taking the time to read the documentation. Codon is not exactly like Python: there's support for Nvidia GPUs included as well, and I ran into a limitation when using a dictionary. I suspect that some potential users will appreciate that Codon takes Python as input and produces executables, making the distribution of code simpler while avoiding disclosure of the source. Codon, with its LLVM backend, also seems like a great solution for people wanting to use Python for embedded projects.

My uses of Python are much simpler: I can process millions of lines of nginx logs in seconds, so a reduction in execution time means little to me. I do think there will be others who can take full advantage of Codon.

Farrow's article also points out that Codon "must be licensed for commercial use, but versions older than three years convert to an Apache license. Non-commercial users are welcome to experiment with Codon."
Desktops (Apple)

Unix Pioneer Ken Thompson Announces He's Switching From Mac To Linux (youtube.com) 175

The closing keynote at the SCaLE 20x conference was delivered by 80-year-old Ken Thompson (co-creator of Unix, Plan 9, UTF-8, and the Go programming language).

Slashdot reader motang shared Thompson's answer to a question at the end about what operating system he uses today: I have, for most of my life — because I was sort of born into it — run Apple.

Now recently, meaning within the last five years, I've become more and more depressed, and what Apple is doing to something that should allow you to work is just atrocious. But they are taking a lot of space and time to do it, so it's okay.

And I have come, within the last month or two, to say, even though I've invested, you know, a zillion years in Apple — I'm throwing it away. And I'm going to Linux. To Raspbian in particular.

IBM

The SCO Lawsuit: Looking Back 20 Years Later (lwn.net) 105

"On March 7, 2003, a struggling company called The SCO Group filed a lawsuit against IBM," writes LWN.net, "claiming that the success of Linux was the result of a theft of SCO's technology..."

Two decades later, "It is hard to overestimate how much the community we find ourselves in now was shaped by a ridiculous lawsuit 20 years ago...." It was the claim of access to Unix code that was the most threatening allegation for the Linux community. SCO made it clear that, in its opinion, Linux was stolen property: "It is not possible for Linux to rapidly reach UNIX performance standards for complete enterprise functionality without the misappropriation of UNIX code, methods or concepts". To rectify this "misappropriation", SCO was asking for a judgment of at least $1 billion, later increased to $5 billion. As the suit dragged on, SCO also started suing Linux users as it tried to collect a tax for use of the system.

Though this has never been proven, it was widely assumed at the time that SCO's real objective was to prod IBM into acquiring the company. That would have solved SCO's ongoing business problems and IBM, for rather less than the amount demanded in court, could have made an annoying problem go away and also lay claim to the ownership of Unix — and, thus, Linux. To SCO's management, it may well have seemed like a good idea at the time. IBM, though, refused to play that game; the company had invested heavily into Linux in its early days and was uninterested in allowing any sort of intellectual-property taint to attach to that effort. So the company, instead, directed its not inconsiderable legal resources to squashing this attack. But notably, so did the development community as a whole, as did much of the rest of the technology industry.

Over the course of the following years — far too many years — SCO's case fell to pieces. The "misappropriated" technology wasn't there. Due to what must be one of the worst-written contracts in technology-industry history, it turned out that SCO didn't even own the Unix copyrights it was suing over. The level of buffoonery was high from the beginning and got worse; the company lost at every turn and eventually collapsed into bankruptcy.... Microsoft, which had not yet learned to love Linux, funded SCO and loudly bought licenses from the company. Magazines like Forbes were warning the "Linux-loving crunchies in the open-source movement" that they "should wake up". SCO was suggesting a license fee of $1,399 — per-CPU — to run Linux.... Such an effort, in less incompetent hands, could easily have damaged Linux badly.

As it went, SCO, despite its best efforts, instead succeeded in improving the position of Linux — in development, legal, and economic terms — considerably.

The article argues SCO's lawsuit ultimately proved that Linux didn't contain copyrighted code "in a far more convincing way than anybody else could have." (And the provenance of all Linux code contributions is now carefully documented.) The case also proved the need for lawyers to vigorously defend the rights of open source programmers. And most of all, it revealed the Linux community was widespread and committed.

And "Twenty years later, it is fair to say that Linux is doing a little better than The SCO Group. Its swaggering leader, who thought to make his fortune by taxing Linux, filed for personal bankruptcy in 2020."
Programming

GitHub Claims Source Code Search Engine Is a Game Changer (theregister.com) 39

Thomas Claburn writes via The Register: GitHub has a lot of code to search -- more than 200 million repositories -- and says last November's beta version of a search engine optimized for source code has caused a "flurry of innovation." GitHub engineer Timothy Clem explained that the company has had problems getting existing technology to work well. "The truth is from Solr to Elasticsearch, we haven't had a lot of luck using general text search products to power code search," he said in a GitHub Universe video presentation. "The user experience is poor. It's very, very expensive to host and it's slow to index." In a blog post on Monday, Clem delved into the technology used to scour just a quarter of those repos, a code search engine built in Rust called Blackbird.

Blackbird currently provides access to almost 45 million GitHub repositories, which together amount to 115TB of code and 15.5 billion documents. Sifting through that many lines of code requires something stronger than grep, a common command line tool on Unix-like systems for searching through text data. Using ripgrep on an 8-core Intel CPU to run an exhaustive regular expression query on a 13GB file in memory, Clem explained, takes about 2.769 seconds, or 0.6GB/sec/core. [...] At 0.01 queries per second, grep was not an option. So GitHub front-loaded much of the work into precomputed search indices. These are essentially maps of key-value pairs. This approach makes it less computationally demanding to search for document characteristics like the programming language or word sequences by using a numeric key rather than a text string. Even so, these indices are too large to fit in memory, so GitHub built iterators for each index it needed to access. According to Clem, these lazily return sorted document IDs that represent the rank of the associated document and meet the query criteria.

To keep the search index manageable, GitHub relies on sharding -- breaking the data up into multiple pieces using Git's content addressable hashing scheme -- and on delta encoding -- storing data differences (deltas) to reduce the data and metadata to be crawled. This works well because GitHub has a lot of redundant data (e.g. forks) -- its 115TB of data can be boiled down to 25TB through deduplication and other data-shaving techniques. The resulting system works much faster than grep -- 640 queries per second compared to 0.01 queries per second. And indexing occurs at a rate of about 120,000 documents per second, so processing 15.5 billion documents takes about 36 hours, or 18 for re-indexing since delta (change) indexing reduces the number of documents to be crawled.
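
To make the index idea concrete, here is a toy Python sketch — purely illustrative, and nothing like GitHub's actual Rust implementation — of an inverted index whose keys map to sorted lists of document IDs, plus a lazy merge that intersects two posting lists the way the iterators described above do:

from collections import defaultdict

# Toy corpus: document ID -> text. (Blackbird indexes code, not prose.)
docs = {1: "fast rust search", 2: "rust code search", 3: "python code"}

# Inverted index: token -> sorted list of document IDs, a key-value map
# standing in for the precomputed search indices described above.
index = defaultdict(list)
for doc_id in sorted(docs):
    for token in set(docs[doc_id].split()):
        index[token].append(doc_id)  # IDs are appended in ascending order

def intersect(postings_a, postings_b):
    # Lazily yield the doc IDs present in both sorted posting lists.
    it_a, it_b = iter(postings_a), iter(postings_b)
    a, b = next(it_a, None), next(it_b, None)
    while a is not None and b is not None:
        if a == b:
            yield a
            a, b = next(it_a, None), next(it_b, None)
        elif a < b:
            a = next(it_a, None)
        else:
            b = next(it_b, None)

print(list(intersect(index["rust"], index["search"])))  # -> [1, 2]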

Oracle

Six Years Later, HPE and Oracle Quietly Shut Door On Solaris Lawsuit (theregister.com) 10

HPE and Oracle have settled their long-running legal case over alleged copyright infringement regarding Solaris software updates for HPE customers, but it looks like the nature of the settlement is going to remain under wraps. The Register reports: The pair this week informed [PDF] the judge overseeing the case that they'd reached a mutual settlement and asked for the case to be dismissed "with prejudice" -- ie, permanently. The settlement agreement is confidential, and its terms won't be made public. The case goes back to at least 2016, when Oracle filed a lawsuit against HPE over the rights to support the Solaris operating system. HPE and a third company, software support outfit Terix, were accused of offering Solaris support for customers while the latter was not an authorized Oracle partner.

Big Red's complaint claimed HPE had falsely represented to customers that it and Terix could lawfully provide Solaris updates and other support services at a lower cost than Oracle, and that the two had worked together to provide customers with access to such updates. The suit against HPE was thrown out of court in 2019, but revived in 2021 when a judge denied HPE's motion for a summary judgment in the case. Terix settled its case in 2015 for roughly $58 million. Last year, the case went to court, and in June a jury found HPE liable for providing customers with Solaris software updates without Oracle's permission, awarding the latter $30 million for copyright infringement.

But that wasn't the end of the matter, because HPE was back a couple of months later to appeal the verdict, claiming Oracle's complaint that it had directly infringed copyrights with regard to Solaris was not backed by sufficient evidence. This hinged on HPE claiming that Oracle had failed to prove that any of the patches and updates in question were actually protected by copyright, but also that Oracle could not prove HPE had any control over Terix in its purported infringement activities. Oracle for its part filed a motion asking the court for a permanent injunction against HPE to prevent it copying or distributing the Solaris software, firmware or support materials, except as allowed by Oracle. Now it appears that the two companies have come to some mutually acceptable out-of-court arrangement, as often happens in acrimonious and long-running legal disputes.

Unix

OSnews Decries 'The Mass Extinction of Unix Workstations' (osnews.com) 284

Anyone remember the high-end commercial UNIX workstations from a few decades ago — from companies like IBM, DEC, SGI, and Sun Microsystems?

Today OSnews looked back — but also explored what happens when you try to buy one today: As x86 became ever more powerful and versatile, and with the rise of Linux as a capable UNIX replacement and the adoption of the NT-based versions of Windows, the days of the UNIX workstations were numbered. A few years into the new millennium, virtually all traditional UNIX vendors had ended production of their workstations and in some cases even their associated architectures, with a lacklustre collective effort to move over to Intel's Itanium — which didn't exactly go anywhere and is now nothing more than a sour footnote in computing history.

Approaching roughly 2010, all the UNIX workstations had disappeared.... and by now, they're all pretty much dead (save for Solaris). Users and industries moved on to x86 on the hardware side, and Linux, Windows, and in some cases, Mac OS X on the software side.... Over the past few years, I have come to learn that if you want to get into buying, using, and learning from UNIX workstations today, you'll run into various problems which can roughly be filed into three main categories: hardware availability, operating system availability, and third-party software availability.

Their article details their own attempts to buy one over the years, ultimately concluding the experience "left me bitter and frustrated that so much knowledge — in the form of documentation, software, tutorials, drivers, and so on — is disappearing before our very eyes." Shortsightedness and disinterest in their own heritage by corporations, big and small, is destroying entire swaths of software, and as more years pass by, it will get ever harder to get any of these things back up and running.... As for all the third-party software — well, I'm afraid it's too late for that already. Chasing down the rightsholders is already an incredibly difficult task, and even if you do find them, they are probably not interested in helping you, and even if by some miracle they are, they most likely no longer even have the ability to generate the required licenses or release versions with the licensing ripped out. Stuff like Pro/ENGINEER and SoftWindows for UNIX are most likely gone forever....

Software is dying off at an alarming rate, and I fear there's no turning the tide of this mass extinction.

The article also wonders why companies like HPE don't just "dump some ISO files" onto an FTP server, along with patch depots and documentation. "This stuff has no commercial value, they're not losing any sales, and it will barely affect their bottom line."
Books

Cheeky New Book Identifies 26 Lines of Code That Changed the World (thenewstack.io) 48

Long-time Slashdot reader destinyland writes: A new book identifies "26 Lines of Code That Changed the World." But its cheeky title also incorporates a comment from Unix's source code — "You are Not Expected to Understand This". From a new interview with the book's editor:

With chapter titles like "Wear this code, go to jail" and "the code that launched a million cat videos," each chapter offers appreciations for programmers, gathering up stories about not just their famous lives but their sometimes infamous works. (In Chapter 10 — "The Accidental Felon" — journalist Katie Hafner reveals whatever happened to that Harvard undergraduate who went on to inadvertently create one of the first malware programs in 1988...) The book quickly jumps from milestones like the Jacquard Loom and the invention of COBOL to bitcoin and our thought-provoking present, acknowledging both the code that guided the Apollo 11 moon landing and the code behind the 1962 videogame Spacewar. The Smithsonian Institution's director for their Center for the Study of Invention and Innovation writes in Chapter 4 that the game "symbolized a shift from computing being in the hands of priest-like technicians operating massive computers to enthusiasts programming and hacking, sometimes for the sheer joy of it."

I contributed chapter 9, about a 1975 comment in some Unix code that became "an accidental icon" commemorating a "momentary glow of humanity in a world of unforgiving logic." This chapter provided the book with its title. (And I'm also responsible for the book's index entry for "Linux, expletives in source code of".) In a preface, the book's editor describes the book's 29 different authors as "technologists, historians, journalists, academics, and sometimes the coders themselves," explaining "how code works — or how, sometimes, it doesn't work — owing in no small way to the people behind it."

"I've been really interested over the past several years to watch the power of the tech activists and tech labor movements," the editor says in this interview. "I think they've shown really immense power to effect change, and power to say, 'I'm not going to work on something that doesn't align with what I want for the future.' That's really something to admire.

"But of course, people are up against really big forces...."

Security

OpenSSL Warns of Critical Security Vulnerability With Upcoming Patch (zdnet.com) 31

An anonymous reader quotes a report from ZDNet: Everyone depends on OpenSSL. You may not know it, but OpenSSL is what makes it possible to use secure Transport Layer Security (TLS) on Linux, Unix, Windows, and many other operating systems. It's also what is used to lock down pretty much every secure communications and networking application and device out there. So we should all be concerned that Mark Cox, a Red Hat Distinguished Software Engineer and the Apache Software Foundation (ASF)'s VP of Security, this week tweeted, "OpenSSL 3.0.7 update to fix Critical CVE out next Tuesday 1300-1700UTC." How bad is "Critical"? According to OpenSSL, an issue of critical severity affects common configurations and is also likely exploitable. It's likely to be abused to disclose server memory contents, and potentially reveal user details, and could be easily exploited remotely to compromise server private keys or remotely execute code. In other words, pretty much everything you don't want happening on your production systems.

The last time OpenSSL had a kick in its security teeth like this one was in 2016. That vulnerability could be used to crash and take over systems. Even years after it arrived, security company Check Point estimated it affected over 42% of organizations. This one could be worse. We can only hope it's not as bad as that all-time champion of OpenSSL's security holes, 2014's HeartBleed. [...] There is another little silver lining in this dark cloud. This new hole only affects OpenSSL versions 3.0.0 through 3.0.6. So, older operating systems and devices are likely to avoid these problems. For example, Red Hat Enterprise Linux (RHEL) 8.x and earlier and Ubuntu 20.04 won't be smacked by it. RHEL 9.x and Ubuntu 22.04, however, are a different story. They do use OpenSSL 3.x. [...] But, if you're using anything with OpenSSL 3.x in it -- anything -- get ready to patch on Tuesday. This is likely to be a bad security hole, and exploits will soon follow. You'll want to make your systems safe as soon as possible.
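
A quick way to check whether a given machine is in the affected range is to ask the command-line tool for its version (example output shown; the exact string varies by build, and anything reporting 3.0.0 through 3.0.6 is the one to watch):

$ openssl version
OpenSSL 3.0.5 5 Jul 2022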

Operating Systems

OpenBSD 7.2 Released 21

Longtime Slashdot reader lazyeye writes: The 53rd release of OpenBSD, version 7.2, has officially been released. Support for new platforms such as the Ampere Altra and the Apple M2 chip, plus support for the Lenovo ThinkPad X13s and other machines using the Qualcomm Snapdragon 8cx Gen 3 (SC8280XP) SoC, is now included, along with various kernel improvements. The announcement with all the details is available at the link [here] from the openbsd-announce mailing list.
Books

'Linux IP Stacks Commentary' Book Tries Free Online Updates (satchell.net) 13

Recently the authors of Elements of Programming shared an update. "After ten years in print, our publisher decided against further printings and has reverted the rights to us. We are publishing Elements of Programming in two forms: a free PDF and a no-markup paperback."

And that's not the only old book that's getting a new life on the web...

22 years ago, long-time Slashdot reader Stephen T. Satchell (satch89450) co-authored Linux IP Stacks Commentary, a book commenting on the TCP/IP code in Linux kernel 2.0.34. ("Old-timers will remember Lions' Commentary on Unix, the book circulated at universities as xerographic copies on the sly. Same sort of thing.") But the print edition struggled to update as frequently as the Linux kernel itself, and Satchell wrote a Slashdot post exploring ways to fund a possible update.

At the time Slashdot's editors noted that "One of the largest complaints about Linux is that there is a lack of high-profile documentation. It would be sad if this publication were not made simply because of the lack of funds (which some people would see as a lack of interest) necessary to complete it." But that's how things seemed to end up — until Satchell suddenly reappeared to share this update from 2022: When I was released from my last job, I tried retirement. Wasn't for me. I started going crazy with nothing significant to do. So, going through old hard drives (that's another story), I found the original manuscript files, plus the page proof files, for that two-decade-old book. Aha! Maybe it's time for an update. But how to keep it fresh, as Torvalds continues to release new updates of the Linux kernel?

Publish it on the Web. Carefully.

After four months (and three job interviews) I have the beginnings of the second edition up and available for reading. At the moment it's an updated, corrected, and expanded version of the "gray matter", the exposition portions of the first edition....

The URL for the alpha-beta version of this Web book is satchell.net/ipstacks for your reading pleasure. The companion e-mail address is up and running for you to provide feedback. There is no paywall.

But there's also an ingenious solution to the problem of updating the text as the code of the kernel keeps changing: Thanks to the work of Professor Donald Knuth (thank you!) on his WEB and CWEB programming languages, I have made modifications to devise a method for integrating code from the Git repository of the Linux kernel without making any modifications (let alone submissions) to said kernel code. The proposed method is described in the About section of the Web book. I have scaffolded the process and it works. But that's not the hard part.

The hard part is to write the commentary itself, and crib some kind of Markup language to make the commentary publishing quality. The programs I write will integrate the kernel code with the commentary verbiage into a set of Web pages. Or two slightly different sets of web pages, if I want to support a mobile-friendly version of the commentary.

Another reason for making it a web book is that I can write it and publish it as it comes out of my virtual typewriter. No hard deadlines. No waiting for the printers. And while this can save trees, that's not my intent. The back-of-the-napkin schedule calls for me to finish the expository text in September, start the Python coding for generating commentary pages at the same time, and start writing the commentary on the Internet Control Message Protocol in October. By then, Linus should have version 6.0.0 of the Linux kernel released.

I really, really, really don't want to charge readers to view the web book. Especially as it's still in the virtual typewriter. There isn't any commentary (yet). One thing I have done is to make it as mobile-friendly as I can, because I suspect the target audience will want to read this on a smartphone or tablet, and not be forced to resort to a large-screen laptop or desktop. Also, the graphics are lightweight to minimize the cost for people who pay by the kilopacket. (Does anywhere in the world still do this? Inquiring minds want to know.)

I host this web site on a Protectli appliance in my apartment, so I don't have that continuing expense. The power draw is around 20 watts. My network connection is AT&T fiber — and if it becomes popular I can always upgrade the upstream speed.

The thing is, the cat needs his kibble. I still want to know if there is a source of funding available.

Also, is it worthwhile to make the pages available in a zip file? Then a reader could download a snapshot of the book, and read it off-line.

Perl

'Massive' Ongoing Changes to Perl Help It Move Beyond Its Unix Roots (stackoverflow.blog) 74

Perl's major version number hasn't changed since 1994, notes a new blog post at Stack Overflow by Perl book author Dave Cross. Yet the programming language has still undergone "massive changes" between version 5.6 (summer of 2000) and version 5.36 (released this May).

But because the Perl development team strives for backwards compatibility, "many new Perl features are hidden away behind feature guards and aren't available unless you explicitly turn them on...." You're no doubt familiar with using print() to display data on the console or to write it to a file. Perl 5.10 introduced the say() command which does the same thing but automatically adds a newline character to the output. It sounds like a small thing, but it's surprisingly useful. How many times do you print a line of data to a file and have to remember to explicitly add the newline? This just makes your life a little bit easier....

Some of the improvements were needed because in places Perl's Unix/C heritage shows through a little more than we'd like it to in the 21st century. One good example of this is bareword filehandles... It is a variable. And, worse than that, it's a package variable (which is the closest thing that Perl has to a global variable)... [But] for a long time (back to at least Perl 5.6), it has been possible to open filehandles and store them in lexical variables... For a long time, Perl's standard functions for dealing with dates and times were also very tied to its Unix roots. You may have seen code like this:

my @datetime = localtime();

The localtime() function returns a list of values that represent the various parts of the current local time... Since Perl 5.10, the standard library has included a module called Time::Piece. When you use Time::Piece in your code, it overrides localtime() and replaces it with a function that returns an object that contains details of the current time and date. That object has a strftime() method... And it also has several other methods for accessing information about the time and date [including a method called is_leap_year]... Using Time::Piece will almost certainly make your date and time handling code easier to write and (more importantly) easier to read and understand....
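
A minimal sketch of the difference, assuming Perl 5.10 or later (where Time::Piece ships with the standard library):

use v5.10;          # also enables say()
use Time::Piece;    # core module; overrides localtime()

my $t = localtime();                      # now a Time::Piece object
say $t->strftime('%Y-%m-%d %H:%M:%S');    # readable date formatting
say 'leap year' if $t->is_leap_year;      # no manual date arithmetic needed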

In most languages you'd have a list of variable names after the subroutine name and the parameters would be passed directly into those. Well, as of version 5.36 (which was released earlier this summer) Perl has that too. You turn the feature on with use feature 'signatures'.... Subroutine signatures have many other features. You can, for example, declare default values for parameters.
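
For instance, a minimal signatures sketch, assuming Perl 5.36 (where use v5.36 enables the signatures and say features together with strict and warnings):

use v5.36;

# A named parameter list with a default value, declared in the signature
sub greet ($name, $greeting = 'Hello') {
    say "$greeting, $name!";
}

greet('world');           # prints "Hello, world!"
greet('Perl', 'Howdy');   # prints "Howdy, Perl!"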

And new features possibly coming soon include a new object-oriented programming framework named Corinna being written into the Perl core. "Beyond that, the Perl development team have their eye on a major version number bump."

And to avoid confusion with Raku -- the offshoot programming language formerly known as Perl 6 -- the next major version of Perl will be Perl 7.
GNU is Not Unix

GNU Grep 3.8 Starts Issuing Warnings About Using Egrep and Fgrep (phoronix.com) 86

After 104 commits from six different people, GNU Grep 3.8 was released Saturday, reports Phoronix.

The biggest change? "It's now made more clear that if you are still relying on the egrep and fgrep commands, it's past due for switching to just grep with the appropriate command-line arguments." The egrep and fgrep commands have been deprecated since 2007. Beginning with GNU Grep 3.8 today, calling these commands will now issue a warning to the user that instead they should use grep -E and grep -F, respectively.

Eventually, GNU Grep will drop the egrep / fgrep commands completely but there doesn't seem to be a firm deadline yet for when that removal will happen.

From grep's updated manual: 7th Edition Unix had commands egrep and fgrep that were the counterparts of the modern 'grep -E' and 'grep -F'. Although breaking up grep into three programs was perhaps useful on the small computers of the 1970s, egrep and fgrep were not standardized by POSIX and are no longer needed. In the current GNU implementation, egrep and fgrep issue a warning and then act like their modern counterparts; eventually, they are planned to be removed entirely.

If you prefer the old names, you can use your own substitutes, such as a shell script...
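
The substitute the manual has in mind amounts to a two-line wrapper along these lines, saved as egrep somewhere in your PATH (and likewise with -F for fgrep):

#!/bin/sh
exec grep -E "$@"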

Other notable changes from the release announcement:
  • The confusing GREP_COLOR environment variable is now obsolescent. Instead of GREP_COLOR='xxx' use GREP_COLORS='mt=xxx'
  • Regular expressions with stray backslashes now cause warnings

Wine

Wine 7.16 Brings Fixes for Saint's Row, Metal Gear, and Star Citizen (neowin.net) 28

It's the 29-year-old "compatibility layer" that lets Windows software run on Unix-like systems (including games). And Neowin reports that Wine's latest version has "meaningful fixes" for Steam Deck, HoloISO, and Chimera OS gamers.

Slashdot reader segaboy81 writes: Saint's Row players rejoice! Wine 7.16 has been released and ships with fixes for this, Metal Gear Solid and Star Citizen. [As well as Ragnarok Online.] Though Deck owners may have to wait for these changes to be merged upstream.
"There are a lot of fixes for other non-gaming Windows-y stuff," Neowin adds, "and you can check out those changes at WineHQ."
