SoylentNews is people


The shambling corpse of Steve Jobs lumbers forth, heeding not the end of October! How will you drive him away?

  • Flash running on an Android phone, in denial of his will
  • Zune, or another horror from darkest Redmond
  • Newton, HyperCard, or some other despised interim Apple product
  • BeOS, the abomination from across the sea
  • Macintosh II with expansion slots, in violation of his ancient decree
  • Tow his car for parking in a handicap space without a permit
  • Oncology textbook—without rounded corners
  • Some of us are still in mourning, you insensitive clod!

[ Results | Polls ]
Comments:23 | Votes:55

posted by janrinok on Sunday September 22, @08:13PM   Printer-friendly

NHS scientists find new blood group solving 50-year mystery:

Thousands of lives could be saved around the world after NHS scientists discovered a new blood group system - solving a 50-year-old mystery.

The research team, led by NHS Blood and Transplant (NHSBT) scientists in South Gloucestershire and supported by the University of Bristol, found a blood group called MAL.

They identified the genetic background of the previously known AnWj blood group antigen, which was discovered in 1972 but whose genetic basis remained unknown until this world-first test was developed.

Senior research scientist at NHSBT Louise Tilley said the discovery means better care can be offered to patients with rare blood types.

Ms Tilley, who has worked on the project for 20 years, told the BBC it is "quite difficult to put a number" on how many people will benefit from the test. However, the NHSBT is the last resort for about 400 patients across the world each year.

Everyone has proteins outside their red blood cells known as antigens, but a small number might lack them.

Using genetic testing, NHSBT's International Blood Group Reference Laboratory in Filton has for the first time developed a test that will identify patients missing this antigen.

The test could prove a lifesaver for those who would react against a blood transfusion, and will make it easier to find potential blood donors for this rare blood type.

Philip Brown, who works at the laboratory, was diagnosed with a form of leukaemia about 20 years ago.

He had blood transfusions and a bone marrow transplant - without that, he would have died.

"Anything we can do to make our blood much safer and a better match for patients is a definite step in the right direction," he said.

Head of the laboratory Nicole Thornton said: "Resolving the genetic basis for AnWj has been one of our most challenging projects.

"There is so much work that goes into proving that a gene does actually encode a blood group antigen, but it is what we are passionate about, making these discoveries for the benefit of rare patients around the world.

"Now genotyping tests can be designed to identify genetically AnWj-negative patients and donors.

"Such tests can be added to the existing genotyping platforms."


Original Submission

posted by hubie on Sunday September 22, @03:22PM   Printer-friendly
from the ein-klein-Verlorenemusik dept.

A music historian at the Austrian state archives, Paul Duncan, has completed the final component of an investigation into a lost piece by Wolfgang Amadeus Mozart (1756–1791). It was determined that the authentic Mozart manuscript originated from a Vienna-based copyist named Johannes Traeg and was written by Mozart while still a teenager.

An obscure piano manuscript that had been ignored for centuries is now believed to have been authored by one of the world's most famous composers: Wolfgang Amadeus Mozart.

The sensational discovery was revealed by Austrian officials on Sept. 8, following an extensive investigation into the document.

The manuscript was donated by a private collector to the Styrian Music Association in 1877 and has been housed in a state archive since 2005.

Previously:
(2016) Lost Mozart-Salieri Composition Performed


Original Submission

posted by janrinok on Sunday September 22, @10:34AM   Printer-friendly
from the Better-on-time-than-never dept.

With the required Kconfig options now merged, Linux can be configured as a real-time operating system. The result, the culmination of an effort spanning just over 20 years, allows developers and users to take advantage of real-time computing without having to target a completely separate OS. Embedded systems and live processing will likely see the most immediate improvements. The support is limited to X86, X86_64, ARM64, and RISCV, and hard real time is only possible on hardware that supports it. However, the new competition and interest will likely spur further developments in real-time computing in the future.

One final note: enabling PREEMPT_RT is not a panacea that leads to better performance. Real-time computing and real-time OSes sacrifice maximum throughput for guaranteed latency with minimal jitter. Real time does not mean "as fast as possible"; it means "not too slow." In the wrong situation, it can actually make performance worse.
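That tradeoff is about the worst case, not the average. As a rough illustration of latency jitter (not a real RT benchmark — serious measurements use tools like cyclictest under a SCHED_FIFO priority), one can record how late timed sleeps actually fire:

```python
import statistics
import time

def measure_sleep_jitter(period_ns=1_000_000, samples=200):
    """Request a fixed sleep period and record how late each wakeup is.

    On a stock kernel the overshoot varies widely under load; a
    PREEMPT_RT kernel aims to bound the worst case, not to improve
    the average.
    """
    overshoots = []
    for _ in range(samples):
        start = time.monotonic_ns()
        time.sleep(period_ns / 1e9)
        elapsed = time.monotonic_ns() - start
        overshoots.append(elapsed - period_ns)  # wakeup lateness in ns
    return {
        "mean_ns": statistics.mean(overshoots),
        "worst_ns": max(overshoots),
    }

if __name__ == "__main__":
    stats = measure_sleep_jitter()
    print(f"mean overshoot: {stats['mean_ns'] / 1e3:.1f} us, "
          f"worst: {stats['worst_ns'] / 1e3:.1f} us")
```

On most desktop kernels the worst-case overshoot is many times the mean; a hard real-time configuration is judged on shrinking that worst case.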


Original Submission

posted by janrinok on Sunday September 22, @05:47AM   Printer-friendly

U.S. finds the golden hydrogen in this region: trillions of dollars of this futuristic energy here:

The search for the energy of the future continues in all parts of the world, from Antarctica to outer space (yes, even beyond our planet). However, it seems this race may have been won in America, where researchers have found what so many countries are looking for: a region that could hold trillions of dollars' worth of the energy of the future, the enigmatic golden hydrogen.

The United States has recently discovered rich hydrogen resources within the country that could alter the current energy situation. The use of hydrogen is gradually becoming an indispensable part of the shift away from fossil resources.

The DoE has identified massive underground pools of hydrogen in various places across the country. Hydrogen, the first and lightest element on the periodic table, is the most abundant element in the universe, and yet finding hydrogen in concentrated form on this planet has long been a challenge.

New assessments suggest that these domestic supplies could be used to fuel automobiles and factories, and even to generate electricity. With continued innovation and infrastructure development, hydrogen could help power America's tomorrow.

Scientists in the United States have found that vast reserves of gaseous hydrogen are trapped beneath the Earth's surface, enough to sustain consumers for hundreds of years. The Department of Energy has estimated the hydrogen resources in some parts of the country to be worth trillions of dollars.

This natural hydrogen was generated through geochemical processes and has been confined within underlying rocks and sediments. Though hydrogen is immensely abundant in the universe, localized, denser sources here on Earth are relatively scarce.

With the right technology, this enormous source of clean energy could fuel society for ages. A key advantage is that the hydrogen does not have to be manufactured, so there is no environmental or economic cost associated with its production.

Accessing these hydrogen resources offers the oil and gas industry a path to a new level of energy production. However, there are technical issues associated with scaling up the release of hydrogen and harvesting it efficiently, safely, and sustainably.

Signs of huge hydrogen deposits are known to exist underground in various sedimentary basins across the United States. The largest deposits lie mainly in the western part of the country, in Texas, New Mexico, Utah, and Colorado.

These areas contain trillions of cubic feet of hydrogen gas locked in rock structures deep in the Earth's crust. The hydrogen was generated from deposits of natural gas that, over millions of years, interacted with water and rock to create what is referred to as natural hydrogen.

Texas's Gulf Coast Basin is among the largest onshore basins in terms of oil and gas resources and holds some of the greatest extents. The San Juan Basin is endowed with major natural resources in New Mexico, and the Uinta Basin of Utah also holds extensive amounts.



Original Submission

posted by janrinok on Sunday September 22, @01:05AM   Printer-friendly
from the ready-for-the-times-to-get-better dept.

How Hope Beats Mindfulness When Times Are Tough:

A recent study finds that hope appears to be more beneficial than mindfulness at helping people manage stress and stay professionally engaged during periods of prolonged stress at work. The study underscores the importance of looking ahead, rather than living "in the moment," during hard times.

Mindfulness refers to the ability of an individual to focus attention on the present, in a way that is open, curious and not judgmental. Essentially, the ability to be fully in the moment.

"There's a lot of discussion about the benefits of mindfulness, but it poses two challenges when you're going through periods of stress," says Tom Zagenczyk, co-author of a paper on the work and a professor of management in North Carolina State University's Poole College of Management. "First, it's hard to be mindful when you're experiencing stress. Second, if it's a truly difficult time, you don't necessarily want to dwell too much on the experience you're going through.

[...] "Fundamentally, our findings tell us that hope was associated with people being happy, and mindfulness was not," says Kristin Scott, study co-author and a professor of management at Clemson University. "And when people are hopeful – and happy – they experience less distress, are more engaged with their work, and feel less tension related to their professional lives."

"Being mindful can be tremendously valuable – there are certainly advantages to living in the moment," says Sharon Sheridan, study co-author and an assistant professor of management at Clemson. "But it's important to maintain a hopeful outlook – particularly during periods of prolonged stress. People should be hopeful while being mindful – hold on to the idea that there's a light at the end of the tunnel."

While the study focused on musicians during an extreme set of circumstances, the researchers think there is a takeaway message that is relevant across industry sectors.

"Whenever we have high levels of job stress, it's important to be hopeful and forward looking," says Emily Ferrise, study co-author and a Ph.D. student at Clemson. "And to the extent possible, there is real value for any organization to incorporate hope and forward thinking into their corporate culture – through job conditions, organizational communications, etc."

"Every work sector experiences periods of high stress," says Zagenczyk. "And every company should be invested in having happy employees who are engaged with their work."

Journal Reference: Kristin L. Scott, Emily Ferrise, Sharon Sheridan, Thomas J. Zagenczyk, Work-related resilience, engagement and wellbeing among music industry workers during the Covid-19 pandemic: A multiwave model of mindfulness and hope, Stress and Health, 30 August 2024
https://doi.org/10.1002/smi.3466


Original Submission

posted by janrinok on Saturday September 21, @08:22PM   Printer-friendly
from the where's-my-minerals? dept.

Potential agreement comes despite fears Beijing will choke critical minerals supplies in response:

The US and Japan are close to a deal to curb tech exports to China's chip industry despite alarm in Tokyo about Beijing's threat to retaliate against Japanese companies.

The White House wants to unveil new export controls before November's presidential election, including a measure forcing non-US companies to get licences to sell products to China that would help its tech sector.

Biden administration officials have spent months in intense talks with their counterparts in Japan — and the Netherlands — to establish complementary export control regimes that would mean Japanese and Dutch companies are not targeted by the US "foreign direct product rule".

People in Washington and Tokyo familiar with the talks said the US and Japan were now close to a breakthrough, although a Japanese official cautioned the situation remained "quite fragile" because of fears of Chinese retaliation.

[...] The US export controls are designed to close loopholes in existing rules and add restrictions that reflect the fast progress of Huawei and other Chinese groups in chip production over the past two years.

[...] China said it "firmly opposes the abuse of export controls" and urged "relevant countries" to abide by international economic and trade rules.

Also at ZeroHedge.



Original Submission

posted by hubie on Saturday September 21, @03:33PM   Printer-friendly
from the arguments-about-dresses dept.

A visual neuroscientist realized he saw green and blue differently to his wife. He designed an interactive site that has received over 1.5m visits:

It started with an argument over a blanket.

"I'm a visual neuroscientist, and my wife, Dr Marissé Masis-Solano, is an ophthalmologist," says Dr Patrick Mineault, designer of the viral web app ismy.blue. "We have this argument about a blanket in our house. I think it's unambiguously green and she thinks it's unambiguously blue."

Mineault, also a programmer, was fiddling with new AI-assisted coding tools, so he designed a simple colour discrimination test.

If you navigate to ismy.blue, you'll see the screen populated with a colour and will be prompted to select whether you think it's green or blue. The shades get more similar until the site tells you where on the spectrum you perceive green and blue in comparison with others who have taken the test.
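The site's exact procedure isn't published, but the narrowing sequence it describes behaves like a binary search on hue. A minimal sketch, assuming a hypothetical `respond()` callback and HSL hue angles in degrees (names and the 150°–210° span are illustrative assumptions):

```python
def estimate_boundary(respond, lo=150.0, hi=210.0, steps=12):
    """Binary-search the hue where answers flip from green to blue.

    `respond(hue)` returns "blue" or "green" for a shown hue; the
    interval [lo, hi] spans clearly green (~150 deg) to clearly
    blue (~210 deg) in HSL space.
    """
    for _ in range(steps):
        mid = (lo + hi) / 2
        if respond(mid) == "blue":
            hi = mid   # boundary lies at or below mid
        else:
            lo = mid   # boundary lies above mid
    return (lo + hi) / 2

# Simulated observer whose personal boundary sits at 178 degrees
observer = lambda hue: "blue" if hue >= 178 else "green"
print(round(estimate_boundary(observer), 1))  # converges near 178.0
```

Twelve steps shrink a 60-degree interval to well under a tenth of a degree, which is why only a handful of judgments are needed per visitor.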

"I added this feature, which shows you the distribution, and that really clicked with people," says Mineault. "'Do we see the same colours?' is a question philosophers and scientists – everyone really – have asked themselves for thousands of years. People's perceptions are ineffable, and it's interesting to think that we have different views."

Apparently, my blue-green boundary is "bluer" than 78% of others, meaning my green is blue to most people. How can that be true?

Our brains are hard-wired to distinguish colours via retinal cells called cones, according to Julie Harris, professor of psychology at the University of St Andrews, who studies human visual processing. But how do we do more complex things like giving them names or recognising them from memory?

"Higher-level processing in terms of our ability to do things like name colours is much less clear," says Harris, and could involve both cognition and prior experience.

[...] Most differences in colour perception are physiological, like colour blindness, which affects one in 10 men and one in 100 women. Others, however, may be connected to aspects of culture or language.

The Sapir-Whorf hypothesis of linguistic relativity, popularised in the movie Arrival, suggests that language shapes the way we think, and even how we perceive the world. In the 1930s, Benjamin Lee Whorf argued that the world consisted of "a kaleidoscopic flux of impressions organised ... largely by the linguistic systems of our minds", pointing to, for instance, the Inuits' multiple words for "snow" as an example of differences in cultural perceptions.

Although this theory continues to be hotly debated throughout linguistics, psychology and philosophy, language does inform how we communicate ideas. There's no word for "blue" in ancient Greek, for example, which is why Homer described stormy seas as "wine-dark" in The Odyssey. By contrast, Russian has distinct words for light blue and dark blue. However, recent research suggests a greater vocabulary may only be beneficial for remembering colours and not for perceiving them.

Before you fight online about whether a particular shade is aqua or cyan, it's important to note that ismy.blue's results have limitations. The slightest variation in viewing conditions influences colour perception, which is why vision researchers take such care when designing experiments. Factors like the model of your phone or computer, its age, display settings, ambient light sources, time of day and even which colour is presented first in the test will all play a role in your responses.

Night modes in particular increase the redness of a device's screen, causing blues to appear greener. To see if this was influencing test results, Mineault separated the data into two groups: before or after 6pm. The effect was immediately apparent, especially on devices with built-in night modes.

So what's the point of ismy.blue if it's so variable? In the end, it's just entertainment. But if you'd like results with a little more equivalence, Mineault suggests doing the exercise with others on the same device, so that "everybody's in the same lighting and the same place".

[...] One question remains, though: what colour is the blanket?

"We've taken the test a bunch of times," says Mineault. "As soon as there's a little green in there, I call it green"; his wife sees blue.

The solution? Maybe just buy a new one.

See also:
    • Is my blue your blue?
    • Color blindness - Wikipedia


Original Submission

posted by hubie on Saturday September 21, @10:48AM   Printer-friendly
from the act-now-for-a-limited-time dept.

https://arstechnica.com/gadgets/2024/09/amazon-accused-of-using-false-and-misleading-sales-prices-to-sell-fire-tvs/

A lawsuit is seeking to penalize Amazon for allegedly providing "fake list prices and purported discounts" to mislead people into buying Fire TVs.

As reported by Seattle news organization KIRO 7, a lawsuit seeking class-action certification and filed in US District Court for the Western District of Washington on September 12 [PDF] claims that Amazon has been listing Fire TV and Fire TV bundles with "List Prices" that are higher than what the TVs have recently sold for, thus creating "misleading representation that customers are getting a 'Limited time deal.'" The lawsuit accuses Amazon of violating Washington's Consumer Protection Act.
[...]
Camelcamelcamel, which tracks Amazon prices, claims that the cheapest price of the TV on Amazon was $280 in July. The website also claims that the TV's average price is $330.59; the $300 or better deal seems to have been available on dates in August, September, October, November, and December of 2023, as well as in July, August, and September 2024. The TV was most recently sold at the $449.99 "List Price" in October 2023 and for short periods in July and August 2024, per Camelcamelcamel.
[...]
The lawsuit claims that in some cases, the List Price was only available for "an extremely short period, in some instances as short as literally one day."
[...]
Further, Amazon is accused of using these List Price tactics to "artificially" drive Fire TV demand, putting "upward pressure on the prices that" Amazon can charge for the smart TVs.
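The core of such a claim can be illustrated with a toy check against a price history. The 90-day window, the helper name, and the sample data below are illustrative assumptions, not facts from the case:

```python
from datetime import date, timedelta

def reference_price_is_recent(history, list_price, as_of, window_days=90):
    """Check whether `list_price` actually appeared in recent sale history.

    `history` maps date -> price actually charged. Consumer-protection
    guidance generally expects a "was" price to reflect a genuine recent
    selling price, though the exact window varies by jurisdiction.
    """
    cutoff = as_of - timedelta(days=window_days)
    return any(d >= cutoff and price == list_price
               for d, price in history.items())

# Hypothetical history: a brief stint at the "List Price", then discounts.
history = {
    date(2024, 7, 10): 449.99,
    date(2024, 8, 20): 299.99,
    date(2024, 9, 12): 299.99,
}
print(reference_price_is_recent(history, 449.99, as_of=date(2024, 9, 21)))
```

A one-day appearance inside the window passes this naive check, which is why the complaint focuses on how briefly the List Price was actually charged rather than on whether it ever appeared.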

The legal document points to a similar 2021 case in California [PDF], where Amazon was sued for allegedly deceptive reference prices. It agreed to pay $2 million in penalties and restitution.

Other companies selling electronics have also been scrutinized for allegedly making products seem like they typically and/or recently have sold for more money. For example, Dell Australia received an AUD$10 million fine (about $6.49 million) for "making false and misleading representations on its website about discount prices for add-on computer monitors," per the Australian Competition & Consumer Commission.


Original Submission

posted by hubie on Saturday September 21, @06:05AM   Printer-friendly

https://www.inverse.com/entertainment/denis-villeneuve-rendezvous-with-rama-update

Denis Villeneuve can't stop making movies based on books. The Arrival and Blade Runner 2049 director has delivered two high-budget and polished Dune blockbusters in a row, while a third is on the way. He also has three other book adaptations in the works for when he's finished with Paul Atreides, and the director finally gave a promising update on one of those projects that could be the perfect follow-up to Dune.

In a conversation with Vanity Fair, Villeneuve addressed the three other book-based movies he has in development: Cleopatra, based on the biography by Stacy Schiff, Nuclear War: A Scenario, based on the nonfiction book by Annie Jacobsen, and Rendezvous With Rama, based on the sci-fi novel by Arthur C. Clarke. "I'm working on Rendezvous With Rama and that screenplay is slowly moving forward," he said.

Rendezvous with Rama isn't nearly as well known as Clarke's best-known novel, 2001: A Space Odyssey, but it's definitely well-suited to Villeneuve's mystical, epic style. The first book in a series, Rendezvous with Rama follows a group of human explorers in the distant future as they explore a mysterious alien spaceship that's hurtling towards the sun.


Original Submission

posted by hubie on Saturday September 21, @01:18AM   Printer-friendly

Google says it no longer auctions off ad space in the ways alleged:

A trial under way in federal court in Alexandria, Virginia, will determine if Google's ad tech stack constitutes an illegal monopoly. The first week has included a deep dive into exactly how Google's products work together to conduct behind-the-scenes electronic auctions that place ads in front of consumers in the blink of an eye.

Online advertising has rapidly evolved. Fifteen or so years ago, if you saw an internet display ad, there was a pretty good chance it featured people dancing over their enthusiasm for low mortgage rates, and those ads were foisted on you whether you were looking at real estate or searching for baseball scores.

Now, the algorithms that match ads to your interests are carefully calibrated, sometimes to an almost creepy extent.

Google, for its part, says it has invested billions of dollars to improve the quality of ads that consumers see, and ensure that advertisers can reach the consumers they're seeking.

The Justice Department contends that what Google has also done over the years is rig the automated auctions of ad sales to favor itself over other would-be players in the industry, and also deprived the publishing industry of hundreds of millions of dollars it would have received if the auctions were truly competitive.

[...] In the government's depiction, there are three distinct tools that interact to sell an ad and place it in front of a consumer. There's the ad servers used by publishers to sell space on their websites, particularly the rectangular ads that appear on the top and right-hand side of a web page. Ad networks are used by advertisers to buy ad space across an array of relevant websites.

And in between is the ad exchange, which matches the website publisher to the would-be advertiser by hosting an instant auction.

[...] For years, Google gave its ad exchange, called AdX, the first chance to match a publisher's proposed floor price. For instance, if a publisher wanted to sell a specific ad impression for a minimum of 50 cents, Google's software would give its own ad exchange the first chance to purchase. If Google's ad exchange bid 50 cents, it would win the auction, even if competing ad exchanges down the line were willing to pay more.
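The "first look" mechanic described here can be sketched as a stylized model; the function names, prices, and structure are illustrative, not Google's actual code:

```python
def first_look_auction(floor, adx_bid, other_bids):
    """Stylized 'first look': the favored exchange wins at the floor
    price whenever it matches, even if rivals would pay more."""
    if adx_bid >= floor:
        return ("AdX", floor)
    # Only if the favored exchange declines do the others compete.
    best = max(other_bids, default=0)
    if best >= floor:
        return ("other", best)
    return (None, 0)

def open_auction(floor, bids):
    """A straightforward competitive auction for comparison."""
    best = max(bids, default=0)
    return best if best >= floor else 0

# Publisher sets a 50-cent floor; a rival exchange would pay 80 cents.
print(first_look_auction(0.50, adx_bid=0.50, other_bids=[0.80]))
print(open_auction(0.50, [0.50, 0.80]))
```

In the toy model the first-look rule clears the impression at 0.50 while an open auction would have returned 0.80, which is the gap the Justice Department argues publishers lost.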

Google said the system was necessary to ensure ads loaded quickly. If the computers entertained bids from every ad exchange, it would take too long.

Publishers, dissatisfied with this system, found a workaround to conduct the auctions outside of Google's purview, a process that became known as "header bidding." Internal Google documents introduced at trial described header bidding as an "existential threat" to Google's market share.

Google's response relied on its control of all three components of the process. If publishers conducted an auction outside Google's purview but they still used Google's publisher ad server, called DoubleClick For Publishers, that software forced the winning bid back into Google's Ad Exchange. If Google was willing to match the price that publishers had received under the header-bidding auction, Google would win the auction.

Professor Ramamoorthi Ravi, an expert at Carnegie Mellon University, said rules imposed by Google failed to maximize value for publishers and "seem to have been designed to advantage Google's own products."

[...] Google, for its part, says it hasn't run auctions this way since 2019, and that in the last five years Google's share of the display ad market has begun to erode. It says that tying its buy side, sell side and middleman products together helps them run seamlessly and quickly, and minimizes fraudulent ads or malware risks.

Google also says its innovations over the last 15 years fueled the improvements in matching online ads to consumer interests. Google says it was at the forefront of introducing "real-time bidding," which allowed an advertiser selling shoes, for instance, to be paired up with a consumer whose online profile indicated an interest in purchasing shoes.

Those innovations, according to Google, allowed publishers to sell their available ad space at a premium because the advertiser would know that the ad was going to the eyeballs of someone interested in their product or service.

The Justice Department says that even though Google no longer runs its auctions in the ways described, it helped Google maintain its monopoly in the ad tech market in the years leading up to 2019, and that its existing monopoly allows Google to keep up to 36 cents on the dollar of every ad purchase it brokers when the transaction runs through all of its various products.


Original Submission

posted by janrinok on Friday September 20, @08:34PM   Printer-friendly
from the fake-department-of-fake-liars dept.

https://arstechnica.com/information-technology/2024/09/due-to-ai-fakes-the-deep-doubt-era-is-here/

Given the flood of photorealistic AI-generated images washing over social media networks like X [arstechnica.com] and Facebook [404media.co] these days, we're seemingly entering a new age of media skepticism: the era of what I'm calling "deep doubt." While questioning the authenticity of digital content stretches back [nytimes.com] decades—and analog media long before [wikipedia.org] that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people's existing skepticism toward online content from strangers may be reaching new heights.

[...] Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend [bu.edu] years ago, coining the term "liar's dividend" in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.

Doubt has been a political weapon since ancient times [populismstudies.org]. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.

[...] In April, a panel of federal judges [arstechnica.com] highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials.

[...] Deep doubt impacts more than just current events and legal issues. In 2020, I wrote about a potential "cultural singularity [fastcompany.com]," a threshold where truth and fiction in media become indistinguishable.

[...] "Deep doubt" is a new term, but it's not a new idea. The erosion of trust in online information from synthetic media extends back to the origins of deepfakes themselves. Writing for The Guardian in 2018, David Shariatmadari spoke of [theguardian.com] an upcoming "information apocalypse" due to deepfakes and questioned, "When a public figure claims the racist or sexist audio of them is simply fake, will we believe them?"

[...] Throughout recorded history, historians and journalists have had to evaluate the reliability of sources [wm.edu] based on provenance, context, and the messenger's motives. For example, imagine a 17th-century parchment that apparently provides key evidence about a royal trial. To determine if it's reliable, historians would evaluate the chain of custody, as well as check if other sources report the same information. They might also check the historical context to see if there is a contemporary historical record of that parchment existing. That requirement has not magically changed in the age of generative AI.

[...] You'll notice that our suggested counters to deep doubt above do not include watermarks, metadata, or AI detectors as ideal solutions. That's because trust does not inherently derive from the authority of a software tool. And while AI and deepfakes have dramatically accelerated the issue, bringing us to this new "deep doubt" era, the necessity of finding reliable sources of information about events you didn't witness firsthand is as old as history itself.

[...] It's likely that in the near future, well-crafted synthesized digital media artifacts will be completely indistinguishable from human-created ones. That means there may be no reliable automated way to determine if a convincingly created media artifact was human or machine-generated solely by looking at one piece of media in isolation (remember the sermon on context above). This is already true of text, which has resulted in many human-authored works being falsely labeled [thedailybeast.com] as AI-generated, creating ongoing pain for students in particular.

Throughout history, any form of recorded media, including ancient clay tablets, has been susceptible to forgeries [researchgate.net]. And since the invention of photography, we have never been able to fully trust a camera's output: the camera can lie [nytimes.com].

[...] Credible and reliable sourcing is our most critical tool in determining the value of information, and that's as true today as it was in 3000 BCE [wikipedia.org], when humans first began to create written records.


Original Submission

posted by hubie on Friday September 20, @03:49PM   Printer-friendly

https://cosmographia.substack.com/p/the-black-death-is-far-older-than

In 1338, among a scattering of obscure villages just to the west of Lake Issyk-Kul, Kyrgyzstan, people began dropping dead in droves. Among the many headstones found in the cemeteries of Kara-Djigach and Burana, one can read epitaphs such as "This is the grave of Kutluk. He died of the plague with his wife." Recently, ancient DNA exhumed from these sites has confirmed the presence of the plague bacterium Yersinia pestis, cause of the condition that became known as the Black Death. The strain detected in those remote graveyards of Central Asia has been identified as the most recent common ancestor of the plague that went on to kill as much as 60% of the Eurasian population in the great pandemic of the 14th century.

[...] In 2018, a team of researchers found ancient traces of the plague bacterium in 4,900-year-old remains in Sweden. A few years later, traces of the bacterium were found in a 5,000-year-old skull in Latvia. It was tentatively suggested that these finds correlate with the Neolithic Decline and might explain the large die-off within these farming societies. However, the cases were isolated, with some of the infected buried alongside the uninfected, suggesting there wasn't an epidemic comparable to the Black Death outbreaks that would come in later millennia.

[...] Whether the Neolithic Decline was mostly, or in part, caused by the plague is still up for debate, but one thing is clear: humanity has been battling Yersinia pestis for a long, long time.


Original Submission

posted by hubie on Friday September 20, @11:03AM   Printer-friendly
from the one-step-for-AI-one-giant-leap-for-the-hype-train dept.

https://arstechnica.com/information-technology/2024/09/openais-new-reasoning-ai-models-are-here-o1-preview-and-o1-mini/

OpenAI finally unveiled its rumored "Strawberry" AI language model on Thursday, claiming significant improvements in what it calls "reasoning" and problem-solving capabilities over previous large language models (LLMs). Formally named "OpenAI o1," the model family will initially launch in two forms, o1-preview and o1-mini, available today for ChatGPT Plus and API users.
[...]
In a rare display of public hype-busting, OpenAI product manager Joanne Jang tweeted, "There's a lot of o1 hype on my feed, so I'm worried that it might be setting the wrong expectations. what o1 is: the first reasoning model that shines in really hard tasks, and it'll only get better. (I'm personally psyched about the model's potential & trajectory!) what o1 isn't (yet!): a miracle model that does everything better than previous models. you might be disappointed if this is your expectation for today's launch—but we're working to get there!"
[...]
AI benchmarks are notoriously unreliable and easy to game; however, independent verification and experimentation from users will show the full extent of o1's advancements over time. On top of that, research from MIT showed earlier this year that some of OpenAI's benchmark claims it touted with GPT-4 last year were erroneous or exaggerated.

One of the examples of o1's abilities that OpenAI shared is perhaps the least consequential and impressive, but it's the most talked about due to a recurring meme where people ask LLMs to count the number of Rs in the word "strawberry." Due to tokenization, where the LLM processes words in data chunks called tokens, most LLMs are typically blind to character-by-character differences in words.
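The effect is easy to illustrate with a minimal sketch. Assuming a hypothetical segmentation of "strawberry" into the chunks "str", "aw", "berry" (real tokenizers vary in how they split words), a model that operates on opaque token IDs never "sees" the individual letters, while counting characters directly from the text is trivial:

```python
# Hypothetical tokenization of "strawberry" -- an illustrative split,
# not the segmentation any particular real tokenizer produces.
tokens = ["str", "aw", "berry"]

# Build a toy vocabulary mapping each token to an integer ID.
vocab = {tok: i for i, tok in enumerate(tokens)}

# What the model "sees": a sequence of IDs. The letter 'r' appears
# nowhere in this representation.
token_ids = [vocab[tok] for tok in tokens]
print(token_ids)  # [0, 1, 2]

# Counting characters from the raw text, by contrast, is trivial.
word = "".join(tokens)
print(word, "has", word.count("r"), "r's")  # strawberry has 3 r's
```

Since the model consumes only the ID sequence, a question like "how many Rs are in strawberry?" asks it to recover information that was discarded before the text ever reached it, which is why the meme became a stock test of LLM "reasoning."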
[...]
It's no secret that some people in tech have issues with anthropomorphizing AI models and using terms like "thinking" or "reasoning" to describe the synthesizing and processing operations that these neural network systems perform.

Just after the OpenAI o1 announcement, Hugging Face CEO Clement Delangue wrote, "Once again, an AI system is not 'thinking', it's 'processing', 'running predictions',... just like Google or computers do. Giving the false impression that technology systems are human is just cheap snake oil and marketing to fool you into thinking it's more clever than it is."


Original Submission

posted by martyb on Friday September 20, @06:16AM   Printer-friendly
from the is-there-such-a-thing-as-a-good-patent-troll dept.

Tell Congress: We Can't Afford More Bad Patents:

A key Senate Committee is about to vote on two bills that would bring back some of the worst patents and empower patent trolls.

The Patent Eligibility Restoration Act (PERA), S. 2140, would throw out crucial rules that ban patents on many abstract ideas. Courts would be ordered to approve patents on things like ordering food on a mobile phone or doing basic financial functions online. If PERA passes, the floodgates will open for these vague software patents, which would be used to sue small companies and individuals. This bill even allows for a type of patent on human genes that the Supreme Court rightly disallowed in 2013.

A second bill, the PREVAIL Act, S. 2220, would sharply limit the public's right to challenge patents that never should have been granted in the first place.

Patent trolls—companies that have no product or service of their own, but simply make patent infringement demands on others—are a big problem. They've cost our economy billions of dollars. For a small company, a patent troll demand letter can be ruinous.

We took a big step towards fighting off patent trolls in 2014, when a landmark Supreme Court ruling, the Alice Corp. v. CLS Bank case, established that you can't get a patent by adding "on a computer" to an abstract idea. In 2012, Congress also expanded the ways that a patent can be challenged at the patent office.

These two bills, PERA and PREVAIL, would roll back both of those critical protections against patent trolls. We know that the bill sponsors, Sens. Thom Tillis (R-NC) and Chris Coons (D-DE) are pushing hard for these bills to move forward. We need your help to tell Congress that it's the wrong move.


Original Submission

posted by janrinok on Friday September 20, @01:31AM   Printer-friendly
from the I-spy-with-my-AI-eye dept.

https://www.businessinsider.com/larry-ellison-ai-surveillance-keep-citizens-on-their-best-behavior-2024-9[paywalled].
https://arstechnica.com/information-technology/2024/09/omnipresent-ai-cameras-will-ensure-good-behavior-says-larry-ellison/

Larry Ellison (of Oracle) predicts a future of AI-enabled mass surveillance where everyone lives in a panopticon, constantly watched and recorded by AI that reports all transgressions.

But this is only the start of our surveillance dystopia, according to Larry Ellison, the billionaire cofounder of Oracle. He said AI will usher in a new era of surveillance that he gleefully said will ensure "citizens will be on their best behavior."

Ellison's vision bears more than a passing resemblance to the cautionary world portrayed in George Orwell's prescient novel 1984. In Orwell's fiction, the totalitarian government of Oceania uses ubiquitous "telescreens" to monitor citizens constantly, creating a society where privacy no longer exists and independent thought becomes nearly impossible.

(Here's looking at you, kin?)


Original Submission