Join our Folding@Home team:
Main F@H site
Our team page
Support us: Subscribe Here
and buy SoylentNews Swag
We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.
We have been getting stories for a while about how JWST observations don't line up with current Big Bang timelines. I'm certain there will be "Big Bang Band-Aid" theories at least until the current crop of astrophysicists, who built their entire careers on the semi-biblical "In the Beginning..." theory of where it all started, have themselves died off. Meanwhile, there is never a shortage of contrarian theories out there, and one of them is starting to get some support from the JWST observations of the "deep past" - which, maybe, isn't so deep after all.
Current theories for the redshift observed in more distant galaxies rely on the postulate: "photons travel at the speed of light and arrive unchanged at their destination, exactly when they left their source, from their perspective."
There are other theories. One, in particular, explains the observed redshifts with the idea that photons "get tired" on their journeys of billions of light-years, losing a little frequency / gaining a little wavelength along the way. JWST observations that are seeing mature galaxies at, and before, the previously presumed start of "it all" may align better with the less well developed tired-light theory than they do with the Big Bang. Not only does the "tired light" theory directly explain redshift, but observations of wavelength shift with respect to galactic rotation seem to be lining up better with "tired light" than "Big Bang," too...
Around the same time, Fritz Zwicky, a well-known astronomer, came up with a different idea.
He proposed that the redshift we see in distant galaxies — basically a shift in the light spectrum towards red — might not be because those galaxies are speeding away.
Instead, he thought that the light photons from these galaxies could be losing energy, or "tiring out," as they travel through space.
This energy loss could make it look like the farther galaxies are moving away from us faster than they actually are.
"[...] But the confidence of some astronomers in the Big Bang theory started to weaken when the powerful James Webb Space Telescope (JWST) saw first light."
What if the Universe isn't expanding at all, but instead is quite a bit bigger than we have been guessing it is?
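The two pictures make different quantitative predictions at large distances even though they agree nearby. As a rough illustration (my own sketch, not from the article), here is a minimal Python comparison of a naive linear Hubble law against a tired-light model in which photon energy decays exponentially with distance, with the attenuation length chosen so the two agree for nearby galaxies:

```python
import math

H0 = 70.0                # Hubble constant, km/s/Mpc (illustrative value)
c = 299_792.458          # speed of light, km/s
L = c / H0               # attenuation length (Mpc), chosen so both models match nearby

def z_hubble(d_mpc: float) -> float:
    """Redshift from a naive linear Hubble law: z = H0 * d / c."""
    return H0 * d_mpc / c

def z_tired_light(d_mpc: float) -> float:
    """Redshift if photon energy decays exponentially with distance:
    E(d) = E0 * exp(-d/L), so z = E0/E(d) - 1 = exp(d/L) - 1."""
    return math.exp(d_mpc / L) - 1.0

for d in (100, 1000, 4000):  # distances in Mpc
    print(f"{d:>5} Mpc: Hubble z = {z_hubble(d):.3f}, tired-light z = {z_tired_light(d):.3f}")
```

At 100 Mpc the two are nearly identical; by a few thousand Mpc the tired-light curve runs well above the linear law, which is the kind of divergence that deep observations would have to discriminate.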
A Tesla Semi's fiery crash on California's Interstate 80 turned into a high-stakes firefight, as emergency responders struggled to douse flames ignited by the vehicle's lithium-ion battery pack:
The National Transportation Safety Board (NTSB) reported that CAL FIRE had to use a jaw-dropping 50,000 gallons of water, alongside fire-retardant airdrops, to put out the blaze. The crash and subsequent fire shut down eastbound lanes of I-80 for a staggering 15 hours, as reported by Breitbart.
The Tesla electric big rig, driven by a Tesla employee, veered off the road on August 19, smashing into a traffic post and a tree before careening down a slope and igniting a post-crash inferno. Fortunately, no one was injured. However, the NTSB's report sheds light on the difficulty of extinguishing fires in electric vehicles. Tesla's infamous "thermal runaway" effect—the tendency of lithium-ion batteries to reignite hours after being "put out"—was a constant concern, but the semi's battery system stayed under control this time.
[...] The blaze and the hazardous materials response that followed created chaos along I-80, a key artery linking Northern California with Nevada. Traffic was rerouted, and the full shutdown stretched late into the evening, causing significant delays.
Previously:
Genetic information and ancestry reports of U.S. citizens were among the information stolen in the cyber attack:
23andMe proposes to compensate millions of customers affected by a data breach on the company's platform, offering $30 million as part of the settlement, along with providing users access to a security monitoring system.
The genetic testing service will pay the amount to approximately 6.4 million American users, according to a proposed class action settlement filed in the U.S. District Court for the Northern District of California on Sept. 12. Personal information was exposed last year after a hacker breached the website's security and posted critical user data for sale on the dark web.
[...] According to the settlement proposal, users will be sent a link where they can delete all information related to 23andMe.
[...] In an emailed statement to The Epoch Times, 23andMe Communications Director Andy Kill said that out of the $30 million aggregate amount, "roughly $25 million of the settlement and related legal expenses are expected to be covered by cyber insurance coverage."
Also at USA Today, Fox Business and The Verge.
Previously:
If you know what a pager is, you're OLD. Or a Hezbollah terrorist. According to the Washington Post (paywalled), the Wall Street Journal, CNN, and just about every other outlet, about a dozen people were killed and thousands reportedly injured.
See, kid, back in the stone age we didn't have supercomputers in our pockets acting as telephones, we only had telephones. They were a permanent part of a room. If you weren't home, nobody could call you. But if you were a physician, people needed to call you. So they had "pagers," also called "beepers," that alerted you to call the office.
They're not supposed to blow up. This is James Bond stuff. Since the Israelis can listen in on every cell phone call in the area, Hezbollah needed a secure way to communicate, so they used pagers. But who loaded them with explosives? How? Pagers weren't big; the explosive must have been high tech.
What was 007's tech guy's name?
I remember vague stories heard in the 90s about "viruses" that would take over your computer, then spin your hard drive so fast that it broke.
Then there was the history of stuxnet and the Iran uranium centrifuges.
Just now I saw this story about pagers (of Hezbollah members) exploding https://www.bbc.com/news/articles/cd7xnelvpepo
I suspect a virus that does something to batteries, rather than traditional explosives.
If my suspicion is true... are we looking at a future where high-density batteries are too dangerous for regular people?
The availability of the large datasets used to train LLMs enabled their rapid development. Intense competition among organizations has made open-sourcing LLMs an attractive strategy that has leveled the competitive field:
Large Language Models (LLMs) have not only fascinated technologists and researchers but have also captivated the general public. Leading the charge, OpenAI ChatGPT has inspired the release of numerous open-source models. In this post, I explore the dynamics that are driving the commoditization of LLMs.
Low switching costs are a key factor supporting the commoditization of Large Language Models (LLMs). The simplicity of transitioning from one LLM to another is largely due to the use of a common language (English) for queries. This uniformity allows for minimal cost when switching, akin to navigating between different e-commerce websites. While LLM providers might use various APIs, these differences are not substantial enough to significantly raise switching costs.
In contrast, transitioning between different database systems involves considerable expense and complexity. It requires migrating data, updating configurations, managing traffic shifts, adapting to different query languages or dialects, and addressing performance issues. Adding long-term memory [4] to LLMs could increase their value to businesses at the cost of making it more expensive to switch providers. However, for uses that require only the basic functions of LLMs and do not need memory, the costs associated with switching remain minimal.
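The low-switching-cost point can be made concrete with a thin abstraction layer. The sketch below is purely illustrative (the interface and provider names are hypothetical, not any real vendor SDK): because the prompt itself is plain English, only a one-method adapter per vendor is needed, and application code never changes when the provider is swapped.

```python
from typing import Protocol

class LLMClient(Protocol):
    """Minimal provider-agnostic interface: a prompt in, text out."""
    def complete(self, prompt: str) -> str: ...

# Hypothetical stand-ins for two vendors; real adapters would wrap
# each vendor's SDK behind this same one-method interface.
class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[A] {prompt}"

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[B] {prompt}"

def summarize(client: LLMClient, text: str) -> str:
    # The prompt is ordinary English, so it needs no changes
    # when the underlying provider is swapped.
    return client.complete(f"Summarize in one sentence: {text}")

print(summarize(ProviderA(), "LLMs are being commoditized."))
print(summarize(ProviderB(), "LLMs are being commoditized."))
```

Contrast this with a database migration, where the query dialect, the stored data, and the operational tooling all change at once.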
[...] Open source models like Llama and Mistral allow multiple infrastructure providers to enter the market, enhancing competition and lowering the cost of AI services. These models also benefit from community-driven improvements, which in turn benefits the organizations that originally developed them.
Furthermore, open source LLMs serve as a foundation for future research, making experimentation more affordable and reducing the potential for differentiation among competing products. This mirrors the impact of Linux in the server industry, where its rise enabled a variety of providers to offer standardized server solutions at reduced costs, thereby commoditizing server technology.
Previously:
https://dm319.github.io/pages/2024_09_09_hp12_comma.html
The HP-12c is probably the most iconic financial calculator. Not being in finance myself, and in fact being terribly bad at that kind of thing, I never quite got the purpose of these special-purpose devices. My ignorance came to a halt due to an unfortunate combination of my fixed-rate mortgage period ending and Liz Truss happening, and I was driven to a sudden keen interest in the 'time value of money' (TVM) calculation.
...
Earlier this year I came across a fairly benign-looking Reddit post describing some difficulty changing the decimal point to a decimal comma on a new Brazilian-bought HP-12c. Most of the replies were along the lines of 'you're holding it wrong', but something caught my attention. The poster wasn't the only one: someone else had had the same experience, and I was pointed to numerous Amazon reviews describing similar woes.
To find out for myself, I VPN'd myself over to the Brazilian Amazon and started reading reviews (with the assistance of my phone and Google Translate). What I saw was quite consistent - people couldn't change the point to the comma, and the calculator also failed on something called the internal rate of return (IRR) calculation.
I was curious: were these a different version of the HP-12c? Was it a fake? It is generally accepted that the HP-12c (and to some degree the related HP-12c Platinum) will return exactly the same results no matter what. Why would it be otherwise? It was at this point I needed help, and a very kind Brazilian redditor did the work needed to run the aforementioned TVM tests as a forensic tool. What they found was a set of results entirely different from not just the regular HP-12c, but from any other financial calculator we had tested.
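For context, the TVM tests mentioned above exercise the standard time-value-of-money relation that financial calculators like the HP-12c solve. A minimal Python sketch of the end-of-period case (my own illustration, not the author's test suite):

```python
def tvm_pmt(pv: float, fv: float, rate: float, n: int) -> float:
    """Payment satisfying the standard end-of-period TVM equation
        pv*(1+i)^n + pmt*((1+i)^n - 1)/i + fv = 0
    using the cash-flow sign convention: money received is positive,
    money paid out is negative."""
    f = (1 + rate) ** n
    return -(pv * f + fv) * rate / (f - 1)

# 30-year mortgage: borrow 100,000 at 6% nominal annual (0.5% per month).
pmt = tvm_pmt(pv=100_000, fv=0, rate=0.06 / 12, n=360)
print(f"monthly payment: {pmt:.2f}")  # about -599.55 (a cash outflow)
```

Because every correct implementation must satisfy the same equation, a handful of known TVM problems makes an effective forensic fingerprint for a miscomputing calculator.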
One of the most recent Ig Nobel winners that caught my eye was: Saul Justin Newman, for detective work in discovering that many of the people famous for having the longest lives lived in places that had lousy birth-and-death recordkeeping. He found that almost all data on the reported oldest people in the world are staggeringly wrong, as high as 82% incorrect, and he says, "If equivalent rates of fake data were discovered in any other field... a major scandal would ensue. In demography, however, such revelations seem to barely merit a mention."
The Conversation also picked up on this and interviewed him about it:
I started getting interested in this topic when I debunked a couple of papers in Nature and Science about extreme ageing in the 2010s. In general, the claims about how long people are living mostly don't stack up. I've tracked down 80% of the people aged over 110 in the world (the other 20% are from countries you can't meaningfully analyse). Of those, almost none have a birth certificate. In the US there are over 500 of these people; seven have a birth certificate. Even worse, only about 10% have a death certificate.
The epitome of this is blue zones, which are regions where people supposedly reach age 100 at a remarkable rate. For almost 20 years, they have been marketed to the public. They're the subject of tons of scientific work, a popular Netflix documentary, tons of cookbooks about things like the Mediterranean diet, and so on.
Okinawa in Japan is one of these zones. There was a Japanese government review in 2010, which found that 82% of the people aged over 100 in Japan turned out to be dead. The secret to living to 110 was, don't register your death.
[...] Regions where people most often reach 100-110 years old are the ones where there's the most pressure to commit pension fraud, and they also have the worst records. For example, the best place to reach 105 in England is Tower Hamlets. It has more 105-year-olds than all of the rich places in England put together. It's closely followed by downtown Manchester, Liverpool and Hull. Yet these places have the lowest frequency of 90-year-olds and are rated by the UK as the worst places to be an old person.
[...] Longevity is very likely tied to wealth. Rich people do lots of exercise, have low stress and eat well. I just put out a preprint analysing the last 72 years of UN data on mortality. The places consistently reaching 100 at the highest rates according to the UN are Thailand, Malawi, Western Sahara (which doesn't have a government) and Puerto Rico, where birth certificates were cancelled completely as a legal document in 2010 because they were so full of pension fraud. This data is just rotten from the inside out.
Do you think the Ig Nobel will get your science taken more seriously?
I hope so. But even if not, at least the general public will laugh and think about it, even if the scientific community is still a bit prickly and defensive. If they don't acknowledge their errors in my lifetime, I guess I'll just get someone to pretend I'm still alive until that changes.
The next encrypted phone service has fallen, after EncroChat, Sky ECC and Anom. This time it's probably "Ghost".
A press conference will be held on Wednesday 18 September 2024 to announce a major action against an encrypted communication platform used for criminal activities, such as large-scale drugs trafficking, homicides and money laundering.
This operation is the latest sophisticated effort to date to disrupt the activities of high-risk criminal organisations operating from all four corners of the world.
Details
Speakers:
Europol
French National Gendarmerie (Gendarmerie Nationale)
United States' Federal Bureau of Investigation (FBI)
Australian Federal Police (AFP)
Irish An Garda Síochána
Royal Canadian Mounted Police (RCMP)

Countries and organisations involved:
Australia, Canada, France, Iceland, Ireland, Italy, the Netherlands, Sweden, United States, Europol, Eurojust
Prices of emissions-free trucks need to fall by as much as half to make them an affordable alternative to diesel models, according to a study by consultancy firm McKinsey published on Wednesday, a step it called necessary to help achieve European Union climate targets:
Less than 2% of the EU's heavy freight vehicles are now electric or hydrogen-powered. To meet the bloc's carbon emission reduction targets, that share should rise to 40% of new sales by 2030, showed the study, released before the IAA Transportation 2024 truck show in Hanover.
Currently production costs for electric trucks are 2.5-3 times higher than for diesel ones, the study said, and with logistics firms unwilling to accept higher costs for emissions-free freight, that goal is still distant.
To overcome that, prices for new electric trucks should be no more than 30% higher than for diesel models, McKinsey said, which would require a technological leap in batteries.
For successful implementation of the EU's CO2 strategy, a 25% cut in charging costs is also needed, the study showed, with 900,000 private charging points to be installed in Europe by 2035, which would require a $20 billion investment.
Arthur T Knackerbracket has processed the following story:
Cybercriminals closed some schools in America and Britain this week, preventing kindergarteners in Washington state from attending their first-ever school day and shutting down all internet-based systems for Biggin Hill-area students in England for the next three weeks.
On Sunday, Highline Public Schools, a Seattle-area school district that serves more than 17,000 students from pre-K through high school, alerted its parents and students that all schools, along with activities, athletics and meetings planned for Monday, had been canceled.
"We have detected unauthorized activity on our technology systems and have taken immediate action to isolate critical systems," according to a notice posted on the district's website.
Upon finding the digital intruders on the network, the district called in third-party infosec experts, along with US federal and state law enforcement, to help restore the systems, we're told.
[...] No criminal group has claimed responsibility for the Highline breach, though the school closures follow a ransomware infection that snarled traffic at the Seattle-Tacoma International Airport in late August.
[...] Meanwhile, in the UK, Charles Darwin School sent home a letter with all of its students on September 6, telling parents and caregivers that the "IT issues" it had been experiencing were "worse than hoped." In fact, they were due to a ransomware attack.
Charles Darwin has 1,320 secondary and sixth-form students in Bromley, England.
The Biggin Hill school would be closed between September 9 and September 11 as IT admins wiped all of the staff devices and teachers reorganized all of their lessons, according to headteacher Aston Smith.
Internet, email, and other school systems will be knocked out for an estimated three weeks, he added.
[...] Black Suit, believed to be an offshoot of the now defunct Conti ransomware gang, has claimed to be behind the Charles Darwin School attack. In a post on the criminals' dark-web blog, they say they stole 200 GB of data, including user, business data, employee, student and financial information.
[...] "Unfortunately, cyber-attacks like this are happening more frequently despite having the latest security measures in place," he said. "Our understanding of our situation is that it is similar to what was experienced by the NHS, Transport for London, National Rail, other schools and public sector departments."
[...] "There is no honor amongst the ransomware gangs attacking schools in Washington state and the UK," Semperis principal technologist Sean Deuby told The Register, adding that schools are more vulnerable targets because of their smaller IT budgets and fewer defensive resources. "Attacking just before the first day of school for young kindergartners demonstrates their amorality."
While the Seattle-area district hasn't called the incident ransomware, "reading between the lines on these attacks leads me to believe that the schools were hit by ransomware," Deuby opined.
[...] "Most schools today use Office 365 but still depend upon their on-premises identity system, Active Directory, for its users," Deuby said, adding that this makes exploiting Microsoft AD vulnerabilities more enticing to criminals.
While there's "no silver bullet" to solve schools' security challenges, he suggests working with their IT providers to identify critical services "such as AD that are single points of failure."
"If critical services go down, school stops, and the school buses don't roll," Deuby noted. "Have a plan for what to do. This doesn't have to be perfect but think now about what to do if email goes away or a teacher portal is locked."
Arthur T Knackerbracket has processed the following story:
For some context, Loongson is a Chinese fabless chip company grown out of the country's state-sponsored efforts to develop domestic CPUs. Its 3A6000 chip launched in late 2023 is claimed to match AMD's Zen 3 and Intel's Tiger Lake architectures from 2020. While the company has mostly played in the CPU space until now, the GPU offerings represent a new push.
Their current model, the 9A1000, is a pretty tame GPU aimed at budget systems and low-end AI tasks. But the 9A2000 is allegedly taking things to the next level.
According to Loongson's Chairman and General Manager Hu Weiwu, the 9A2000 delivers performance up to 10 times higher than its predecessor. He claimed it should be "comparable to Nvidia RTX 2080," according to ITHome.
[...] That said, Loongson has another card to play. At the same briefing, Hu also provided a teaser for the next-gen 3B6600 CPU, making some lofty performance claims about its architecture. He touted "significant" changes under the hood that should elevate its single-threaded muscle to "world-leading" levels, according to another ITHome report.
Previous leaks suggest this processor will pack eight LA864 cores clocked at a stout 3GHz, along with integrated LG200 graphics.
As for the launch, Hu gave a tentative first-half-of-2025 target for initial production, with mass availability hopefully following in the second half of next year.
Loongson has typically played more of a supporting role in the CPU arena and is yet to make a dent outside of China. But if this 3B6600 chip can truly hang with the heavyweights of x86 and Arm in per-core performance, it would mark a major step up for the company.
Arthur T Knackerbracket has processed the following story:
Neandertals traveled at least two evolutionary paths on their way to extinction around 40,000 years ago, a new study suggests.
Whether classified as a separate species or a variant of Homo sapiens, Neandertals have typically been viewed as a genetically consistent population. But an adult male’s partial skeleton discovered in France contains genetic clues to a Neandertal line that evolved apart from other European Neandertals for around 50,000 years, nearly up to the time these close relatives of H. sapiens died out, researchers say.
The possibility of a long-lasting, isolated Neandertal population in southwestern Europe supports the idea that these hominids “very likely had their own, complex evolutionary history, with local extinctions and migrations, just like us,” says paleogeneticist Carles Lalueza-Fox of the Institute of Evolutionary Biology in Barcelona, who did not participate in the new study.
A team led by archaeologist Ludovic Slimak of Université Toulouse III – Paul Sabatier in France and population geneticist Martin Sikora of the University of Copenhagen nicknamed the French Neandertal discovery Thorin, after a character in J.R.R. Tolkien’s book The Hobbit. Thorin’s remains, discovered at the entrance of Grotte Mandrin rock shelter in 2015, are still being excavated.
Several dating methods applied to teeth from Thorin and animals buried near his body, as well as Thorin’s position in Grotte Mandrin sediment, indicate that this Neandertal lived between around 50,000 and 42,000 years ago, Slimak’s and Sikora’s group reports September 11 in Cell Genomics.
Molecular segments representing about 65 percent of Thorin’s genome were recovered from a molar, Sikora says. Thorin’s DNA was then compared with DNA previously extracted from other Neandertals, ancient H. sapiens and present-day people.
Arrays of gene variants in Thorin’s DNA more closely align with the previously reported DNA structure of Neandertals that lived around 105,000 years ago, versus Neandertals dating to around 50,000 to 40,000 years ago. Yet analyses of carbon and other diet-related chemical elements in Thorin’s bones and teeth suggest that he lived during an ice age, which did not develop in Europe until about 50,000 years ago.
Thorin also inherited from his parents an unusually high percentage of DNA segments containing consecutive pairs of identical gene variants. Reduced genetic variation of that kind, previously found in Siberian Neandertals, reflects mating among close relatives in a small population (SN: 10/19/22).
Taken together, the genetic evidence fits a scenario in which Thorin belonged to a Neandertal lineage that split from other European Neandertals around 105,000 years ago, the researchers say. For roughly the next 50,000 years, they suspect, Thorin’s lineage consisted of small networks of closely related communities that exchanged mates.
Reasons why those ancient groups avoided mating with other Neandertals in the region, possibly related to language or cultural differences, are unclear, Sikora says.
[...] “If Thorin is really 50,000 years old, this would be an amazing finding showing a strong genetic structure in late Neandertals,” says paleogeneticist Cosimo Posth of the University of Tübingen in Germany. But, he says, further excavation and research at Grotte Mandrin will need to confirm when Thorin lived.
Researchers found Thorin’s remains in a small, natural depression on the rock shelter floor. Slimak’s and Sikora’s group cannot yet say how the body got there or whether it originated in older sediment. An older date for the partial skeleton would indicate, less surprisingly, that Thorin belonged to an isolated population that petered out quickly.
Long-term isolation would have resulted in Thorin inheriting a greater number of short DNA segments containing identical gene pairs than reported in the new study, Lalueza-Fox says. Isolating more of Thorin’s DNA or collecting genetic remnants from other fossil members of his lineage will clarify the evolutionary story of these close-knit Neandertals, he says.
L. Slimak et al. Long genetic and social isolation in Neanderthals before their extinction. Cell Genomics. Published online September 11, 2024. doi: 10.1016/j.xgen.2024.100593.
https://arstechnica.com/information-technology/2024/09/my-dead-father-is-writing-me-notes-again/
Growing up, if I wanted to experiment with something technical, my dad made it happen. We shared dozens of tech adventures together, but those adventures were cut short when he died of cancer in 2013. Thanks to a new AI image generator, it turns out that my dad and I still have one more adventure to go.
Recently, an anonymous AI hobbyist discovered that an image synthesis model called Flux can reproduce someone's handwriting very accurately if specially trained to do so.
[...] I admit that copying someone's handwriting so convincingly could bring dangers. I've been warning for years about an upcoming era where digital media creation and mimicry is completely and effortlessly fluid, but it's still wild to see something that feels like magic work for the first time.
[...] As a daily tech news writer, I keep an eye on the latest innovations in AI image generation. Late last month while browsing Reddit, I noticed a post from an AI imagery hobbyist who goes by the name "fofr"—pronounced "Foffer," he told me, so let's call him that for convenience. Foffer announced that he had replicated J.R.R. Tolkien's handwriting using scans found in archives online.
[...] Foffer's breakthrough was realizing that Flux can be customized using a special technique called "LoRA" (short for "low-rank adaptation") to imitate someone's handwriting in a very realistic way. LoRA is a modular method of fine-tuning Flux to teach it new concepts that weren't in its original training dataset—the initial set of pictures and illustrations its creator used to teach it how to synthesize images.
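The core of the LoRA technique is that instead of retraining a large frozen weight matrix, you learn a small low-rank update to it. A minimal NumPy sketch of the idea (illustrative dimensions and initialization, not Flux's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4                 # frozen weight is d_out x d_in; rank r << d

W = rng.standard_normal((d_out, d_in))     # frozen base weight (never updated)
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized
alpha = 8.0                                # scaling hyperparameter

def lora_forward(x: np.ndarray) -> np.ndarray:
    """Base layer plus low-rank update: y = (W + (alpha/r) * B @ A) @ x.
    Only A and B are trained (2*r*d values instead of d*d); W stays frozen."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapted layer starts out identical to the base model,
# so fine-tuning begins from the model's original behavior.
assert np.allclose(lora_forward(x), W @ x)
```

This is why a LoRA is small enough to train cheaply on a hobbyist budget and to distribute as a modular add-on: only the two thin matrices need to be learned and shared.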
[...] "I don't want to encourage people to copy other's handwriting, especially signatures," Foffer told me in an interview the day he took the Tolkien model down. But he said he would help me attempt to apply his technique to a less famous individual for an article, telling me how I could inexpensively train my own image synthesis model on a cloud AI hosting site called Replicate. "I think you should try it. I think you'll be surprised how fun and easy it is," he said.
[...] My dad was an electronics engineer, and he had a distinctive way of writing in all-caps that was instantly recognizable to me throughout his life. [...] I began the task of assembling a "dad's uppercase" dataset.
[...] using neural networks to model handwriting isn't new. In January 2023, we covered a web app called Calligrapher.ai that can simulate dynamic handwriting styles (based on 2013 research from Alex Graves). A blog post from 2016 written by machine learning scientist Sam Greydanus details another method of creating AI-generated handwriting, and there's a company called Handwrytten that sells robots that write actual letters, with pen on paper, using simulated human handwriting for marketing purposes.
What's new in this instance is that we're using Flux, a free open-weights AI model anyone can download or fine-tune, to absorb and reproduce handwriting styles.
[...] I felt joy to see newly synthesized samples of Dad's handwriting again. They read to me like his written voice, and I can feel the warmth just seeing the letters. I know it's not real and he didn't really write it, so I personally find it fun.
Members of the North Korean hacker group Lazarus, posing as recruiters, are baiting Python developers with coding test projects for password management products that include malware.
The attacks are part of the 'VMConnect campaign' first detected in August 2023, where the threat actors targeted software developers with malicious Python packages uploaded onto the PyPI repository.
According to a report from ReversingLabs, which has been tracking the campaign for over a year, Lazarus hackers host the malicious coding projects on GitHub, where victims find README files with instructions on how to complete the test.
The directions are meant to lend a sense of professionalism and legitimacy to the whole process, as well as a sense of urgency.
ReversingLabs found that the North Koreans impersonate large U.S. banks like Capital One to attract job candidates, likely offering them an enticing employment package.
Further evidence retrieved from one of the victims suggests that Lazarus actively approaches their targets over LinkedIn, a documented tactic for the group.
The hackers direct candidates to find a bug in a password manager application, submit their fix, and share a screenshot as proof of their work.
The README file for the project instructs the victim to first execute the malicious password manager application ('PasswordManager.py') on their system and then start looking for the errors and fixing them.
That file triggers the execution of a base64-obfuscated module hidden in the '__init__.py' files of the 'pyperclip' and 'pyrebase' libraries.
The obfuscated string is a malware downloader that contacts a command-and-control (C2) server and awaits commands. Fetching and running additional payloads is within its capabilities.
To make sure that candidates won't check the project files for malicious or obfuscated code, the README file requires the task to be completed quickly: five minutes for building the project, 15 minutes to implement the fix, and 10 minutes to send back the final result.
This is supposed to prove the developer's expertise in working with Python projects and GitHub, but the goal is to make the victim skip any security checks that may reveal the malicious code.
Article: https://dailynous.com/2024/09/13/journal-publishers-sued-on-antitrust-grounds/
From Daily Nous:
Lucina Uddin, a professor of psychology at the University of California, Los Angeles, is the named plaintiff in an antitrust lawsuit against six publishers of academic journals: Elsevier, Wolters Kluwer, Wiley, Sage, Taylor and Francis Group, and Springer. The lawsuit accuses the publishers of collusion in violation of Section 1 of the Sherman Act, stating that they "conspired to unlawfully appropriate billions of dollars that would have otherwise funded scientific research."
The plaintiff "seeks to recover treble damages, an injunction, and other relief."
The lawsuit, filed in federal district court in New York, can be read in its entirety here. The early paragraphs in which the publishers' "scheme" is described are a good read:
The Publisher Defendants' Scheme has three primary components. First, the Publisher Defendants agreed to not compensate scholars for their labor, in particular not to pay for their peer review services (the "Unpaid Peer Review Rule"). In other words, the Publisher Defendants agreed to fix the price of peer review services at zero. The Publisher Defendants also agreed to coerce scholars into providing their labor for nothing by expressly linking their unpaid labor with their ability to get their manuscripts published in the Publisher Defendants' journals. In the "publish or perish" world of academia, the Publisher Defendants essentially agreed to hold the careers of scholars hostage so that the Publisher Defendants could force them to provide their valuable labor for free.
Second, the Publisher Defendants agreed not to compete with each other for manuscripts by requiring scholars to submit their manuscripts to only one journal at a time (the "Single Submission Rule"). The Single Submission Rule substantially reduces competition among the Publisher Defendants, substantially decreasing incentives to review manuscripts promptly and publish meritorious research quickly. The Single Submission Rule also robs scholars of negotiating leverage they otherwise would have had if more than one journal offered to publish their manuscripts. Thus, the Publisher Defendants know that if they offer to publish a manuscript, the submitting scholar has no viable alternative and the Publisher Defendant can then dictate the terms of publication.
Third, the Publisher Defendants agreed to prohibit scholars from freely sharing the scientific advancements described in submitted manuscripts while those manuscripts are under peer review, a process that often takes over a year (the "Gag Rule"). From the moment scholars submit manuscripts for publication, the Publisher Defendants behave as though the scientific advancements set forth in the manuscripts are their property, to be shared only if the Publisher Defendants grant permission. Moreover, when the Publisher Defendants select manuscripts for publication, the Publisher Defendants will often require scholars to sign away all intellectual property rights, in exchange for nothing. The manuscripts then become the actual property of the Publisher Defendants, and the Publisher Defendants charge the maximum the market will bear for access to that scientific knowledge.