

Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

The shambling corpse of Steve Jobs lumbers forth, heeding not the end of October! How will you drive him away?

  • Flash running on an Android phone, in denial of his will
  • Zune, or another horror from darkest Redmond
  • Newton, HyperCard, or some other despised interim Apple product
  • BeOS, the abomination from across the sea
  • Macintosh II with expansion slots, in violation of his ancient decree
  • Tow his car for parking in a handicap space without a permit
  • Oncology textbook—without rounded corners
  • Some of us are still in mourning, you insensitive clod!

[ Results | Polls ]
Comments: 23 | Votes: 55

posted by hubie on Tuesday September 17, @09:15PM

Arthur T Knackerbracket has processed the following story:

For some context, Loongson is a Chinese fabless chip company grown out of the country's state-sponsored efforts to develop domestic CPUs. Its 3A6000 chip, launched in late 2023, is claimed to match AMD's Zen 3 and Intel's Tiger Lake architectures from 2020. While the company has mostly played in the CPU space until now, its GPU offerings represent a new push.

Their current model, the 9A1000, is a pretty tame GPU aimed at budget systems and low-end AI tasks. But the 9A2000 is allegedly taking things to the next level.

According to Loongson's Chairman and General Manager Hu Weiwu, the 9A2000 delivers performance up to 10 times higher than its predecessor. He claimed it should be "comparable to Nvidia RTX 2080," according to ITHome.

[...] That said, Loongson has another card to play. At the same briefing, Hu also provided a teaser for its next-gen 3B6600 CPU, making some lofty performance claims about its architecture. He touted "significant" changes under the hood that should elevate its single-threaded muscle to "world-leading" levels, according to another ITHome report.

Previous leaks suggest this processor will pack eight LA864 cores clocked at a stout 3GHz, along with integrated LG200 graphics.

As for the launch, Hu gave a tentative first half of 2025 target for initial production, with mass availability hopefully following in the second half of next year.

Loongson has typically played more of a supporting role in the CPU arena and has yet to make a dent outside of China. But if this 3B6600 chip can truly hang with the heavyweights of x86 and Arm in per-core performance, it would mark a major step up for the company.


Original Submission

posted by hubie on Tuesday September 17, @04:27PM
from the all-in-the-family dept.

Arthur T Knackerbracket has processed the following story:

Neandertals traveled at least two evolutionary paths on their way to extinction around 40,000 years ago, a new study suggests.

Whether classified as a separate species or a variant of Homo sapiens, Neandertals have typically been viewed as a genetically consistent population. But an adult male’s partial skeleton discovered in France contains genetic clues to a Neandertal line that evolved apart from other European Neandertals for around 50,000 years, nearly up to the time these close relatives of H. sapiens died out, researchers say.

The possibility of a long-lasting, isolated Neandertal population in southwestern Europe supports the idea that these hominids “very likely had their own, complex evolutionary history, with local extinctions and migrations, just like us,” says paleogeneticist Carles Lalueza-Fox of the Institute of Evolutionary Biology in Barcelona, who did not participate in the new study.

A team led by archaeologist Ludovic Slimak of Université Toulouse III – Paul Sabatier in France and population geneticist Martin Sikora of the University of Copenhagen nicknamed the French Neandertal discovery Thorin, after a character in J.R.R. Tolkien’s book The Hobbit. Thorin’s remains, discovered at the entrance of Grotte Mandrin rock shelter in 2015, are still being excavated.

Several dating methods applied to teeth from Thorin and animals buried near his body, as well as Thorin’s position in Grotte Mandrin sediment, indicate that this Neandertal lived between around 50,000 and 42,000 years ago, Slimak’s and Sikora’s group reports September 11 in Cell Genomics.

Molecular segments representing about 65 percent of Thorin’s genome were recovered from a molar, Sikora says. Thorin’s DNA was then compared with DNA previously extracted from other Neandertals, ancient H. sapiens and present-day people.

Arrays of gene variants in Thorin’s DNA more closely align with the previously reported DNA structure of Neandertals that lived around 105,000 years ago, versus Neandertals dating to around 50,000 to 40,000 years ago. Yet analyses of carbon and other diet-related chemical elements in Thorin’s bones and teeth suggest that he lived during an ice age, which did not develop in Europe until about 50,000 years ago.

Thorin also inherited from his parents an unusually high percentage of DNA segments containing consecutive pairs of identical gene variants. Reduced genetic variation of that kind, previously found in Siberian Neandertals, reflects mating among close relatives in a small population (SN: 10/19/22).

Taken together, the genetic evidence fits a scenario in which Thorin belonged to a Neandertal lineage that split from other European Neandertals around 105,000 years ago, the researchers say. For roughly the next 50,000 years, they suspect, Thorin’s lineage consisted of small networks of closely related communities that exchanged mates.

Reasons why those ancient groups avoided mating with other Neandertals in the region, possibly related to language or cultural differences, are unclear, Sikora says.

[...] “If Thorin is really 50,000 years old, this would be an amazing finding showing a strong genetic structure in late Neandertals,” says paleogeneticist Cosimo Posth of the University of Tübingen in Germany. But, he says, further excavation and research at Grotte Mandrin will need to confirm when Thorin lived.

Researchers found Thorin’s remains in a small, natural depression on the rock shelter floor. Slimak’s and Sikora’s group cannot yet say how the body got there or whether it originated in older sediment. An older date for the partial skeleton would indicate, less surprisingly, that Thorin belonged to an isolated population that petered out quickly.

Long-term isolation would have resulted in Thorin inheriting a greater number of short DNA segments containing identical gene pairs than reported in the new study, Lalueza-Fox says. Isolating more of Thorin’s DNA or collecting genetic remnants from other fossil members of his lineage will clarify the evolutionary story of these close-knit Neandertals, he says.

L. Slimak et al. Long genetic and social isolation in Neanderthals before their extinction. Cell Genomics. Published online September 11, 2024. doi: 10.1016/j.xgen.2024.100593.


Original Submission

posted by janrinok on Tuesday September 17, @11:45AM
from the dystopia-is-now,-so-hide-your-checkbook! dept.

https://arstechnica.com/information-technology/2024/09/my-dead-father-is-writing-me-notes-again/

Growing up, if I wanted to experiment with something technical, my dad made it happen. We shared dozens of tech adventures together, but those adventures were cut short when he died of cancer in 2013. Thanks to a new AI image generator, it turns out that my dad and I still have one more adventure to go.

Recently, an anonymous AI hobbyist discovered that an image synthesis model called Flux can reproduce someone's handwriting very accurately if specially trained to do so.

[...] I admit that copying someone's handwriting so convincingly could bring dangers. I've been warning for years about an upcoming era where digital media creation and mimicry is completely and effortlessly fluid, but it's still wild to see something that feels like magic work for the first time.

[...] As a daily tech news writer, I keep an eye on the latest innovations in AI image generation. Late last month while browsing Reddit, I noticed a post from an AI imagery hobbyist who goes by the name "fofr"—pronounced "Foffer," he told me, so let's call him that for convenience. Foffer announced that he had replicated J.R.R. Tolkien's handwriting using scans found in archives online.

[...] Foffer's breakthrough was realizing that Flux can be customized using a special technique called "LoRA" (short for "low-rank adaptation") to imitate someone's handwriting in a very realistic way. LoRA is a modular method of fine-tuning Flux to teach it new concepts that weren't in its original training dataset—the initial set of pictures and illustrations its creator used to teach it how to synthesize images.
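
For readers who want the mechanics, the core idea behind LoRA fits in a few lines of PyTorch: freeze the original weights and learn a small low-rank correction on top of them. The sketch below is only a generic illustration of that concept; it is not the actual Flux training code or the Replicate workflow described here, and the layer size, rank, and scaling values are arbitrary.

```python
# Generic LoRA sketch (illustrative; not Flux's or Replicate's code).
# Idea: keep the pretrained weight W frozen and learn a low-rank update,
# so the effective weight becomes W + (alpha / r) * B @ A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze original weights
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the small trainable low-rank path
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Toy usage: adapt a 512->512 projection while training only ~3% as many weights
layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(4, 512))
```

Because only the two small matrices are trained, a fine-tune like this can run cheaply on rented cloud GPUs, which is what makes hosted services such as Replicate practical for hobbyist experiments like Foffer's.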

[...] "I don't want to encourage people to copy other's handwriting, especially signatures," Foffer told me in an interview the day he took the Tolkien model down. But said he would help me attempt to apply his technique to a less famous individual for an article, telling me how I could inexpensively train my own image synthesis model on a cloud AI hosting site called Replicate. "I think you should try it. I think you'll be surprised how fun and easy it is," he said.

[...] My dad was an electronics engineer, and he had a distinctive way of writing in all-caps that was instantly recognizable to me throughout his life. [...] I began the task of assembling a "dad's uppercase" dataset.

[...] using neural networks to model handwriting isn't new. In January 2023, we covered a web app called Calligrapher.ai that can simulate dynamic handwriting styles (based on 2013 research from Alex Graves). A blog post from 2016 written by machine learning scientist Sam Greydanus details another method of creating AI-generated handwriting, and there's a company called Handwrytten that sells robots that write actual letters, with pen on paper, using simulated human handwriting for marketing purposes.

What's new in this instance is that we're using Flux, a free open-weights AI model anyone can download or fine-tune, to absorb and reproduce handwriting styles.

[...] I felt joy to see newly synthesized samples of Dad's handwriting again. They read to me like his written voice, and I can feel the warmth just seeing the letters. I know it's not real and he didn't really write it, so I personally find it fun.


Original Submission

posted by janrinok on Tuesday September 17, @07:03AM

Members of the North Korean hacker group Lazarus, posing as recruiters, are baiting Python developers with coding test projects:

Members of the North Korean hacker group Lazarus, posing as recruiters, are baiting Python developers with coding test projects for password management products that include malware.

The attacks are part of the 'VMConnect campaign' first detected in August 2023, where the threat actors targeted software developers with malicious Python packages uploaded onto the PyPI repository.

According to a report from ReversingLabs, which has been tracking the campaign for over a year, Lazarus hackers host the malicious coding projects on GitHub, where victims find README files with instructions on how to complete the test.

The directions are meant to provide a sense of professionalism and legitimacy to the whole process, as well as a sense of urgency.

ReversingLabs found that the North Koreans impersonate large U.S. banks like Capital One to attract job candidates, likely offering them an enticing employment package.

Further evidence retrieved from one of the victims suggests that Lazarus actively approaches their targets over LinkedIn, a documented tactic for the group.

The hackers direct candidates to find a bug in a password manager application, submit their fix, and share a screenshot as proof of their work.

The README file for the project instructs the victim to first execute the malicious password manager application ('PasswordManager.py') on their system and then start looking for the errors and fixing them.

That file triggers the execution of a base64-obfuscated module hidden in the '__init__.py' files of the 'pyperclip' and 'pyrebase' libraries.

The obfuscated string is a malware downloader that contacts a command and control (C2) server and awaits commands. Fetching and running additional payloads is within its capabilities.

To make sure that the candidates won't check the project files for malicious or obfuscated code, the README file requires the task to be completed quickly: five minutes for building the project, 15 minutes to implement the fix, and 10 minutes to send back the final result.

This is supposed to prove the developer's expertise in working with Python projects and GitHub, but the goal is to make the victim skip any security checks that may reveal the malicious code.
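
None of this is from the ReversingLabs report, but as a purely defensive sketch, the pattern described above (a long base64 blob hidden in a package's '__init__.py' and later executed) is something a cautious candidate could flag with a quick static scan before running anything. The file targets, length threshold, and exec/eval check below are illustrative assumptions; real triage should rely on proper security tooling.

```python
# Heuristic, defensive-only scan for the pattern described above:
# long base64-looking string literals and exec()/eval() calls inside
# __init__.py files. Thresholds and targets are illustrative assumptions.
import ast
import base64
import pathlib
import re

B64ISH = re.compile(r"^[A-Za-z0-9+/=\s]{200,}$")   # long, base64-like literal

def suspicious_findings(path: pathlib.Path) -> list[str]:
    findings = []
    tree = ast.parse(path.read_text(errors="ignore"))
    for node in ast.walk(tree):
        # Flag very long string constants that decode cleanly as base64
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            text = node.value
            if B64ISH.match(text):
                try:
                    base64.b64decode(re.sub(r"\s", "", text), validate=True)
                    findings.append(f"{path}:{node.lineno} long base64 blob")
                except Exception:
                    pass
        # Flag direct exec()/eval() calls, a common way to run decoded payloads
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in {"exec", "eval"}):
            findings.append(f"{path}:{node.lineno} call to {node.func.id}()")
    return findings

for init_file in pathlib.Path(".").rglob("__init__.py"):
    for hit in suspicious_findings(init_file):
        print(hit)
```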


Original Submission

posted by janrinok on Tuesday September 17, @02:17AM
from the it-is-about-time-somebody-sued-the-bastards dept.

Article: https://dailynous.com/2024/09/13/journal-publishers-sued-on-antitrust-grounds/

From Dailynous:

Lucina Uddin, a professor of psychology at the University of California, Los Angeles, is the named plaintiff in an antitrust lawsuit against six publishers of academic journals: Elsevier, Wolters Kluwer, Wiley, Sage, Taylor and Francis Group, and Springer. The lawsuit accuses the publishers of collusion in violation of Section 1 of the Sherman Act, stating that they "conspired to unlawfully appropriate billions of dollars that would have otherwise funded scientific research."

The plaintiff "seeks to recover treble damages, an injunction, and other relief."

The lawsuit, filed in federal district court in New York, can be read in its entirety here. The early paragraphs in which the publishers' "scheme" is described are a good read:

The Publisher Defendants' Scheme has three primary components. First, the Publisher Defendants agreed to not compensate scholars for their labor, in particular not to pay for their peer review services (the "Unpaid Peer Review Rule"). In other words, the Publisher Defendants agreed to fix the price of peer review services at zero. The Publisher Defendants also agreed to coerce scholars into providing their labor for nothing by expressly linking their unpaid labor with their ability to get their manuscripts published in the Publisher Defendants' journals. In the "publish or perish" world of academia, the Publisher Defendants essentially agreed to hold the careers of scholars hostage so that the Publisher Defendants could force them to provide their valuable labor for free.

Second, the Publisher Defendants agreed not to compete with each other for manuscripts by requiring scholars to submit their manuscripts to only one journal at a time (the "Single Submission Rule"). The Single Submission Rule substantially reduces competition among the Publisher Defendants, substantially decreasing incentives to review manuscripts promptly and publish meritorious research quickly. The Single Submission Rule also robs scholars of negotiating leverage they otherwise would have had if more than one journal offered to publish their manuscripts. Thus, the Publisher Defendants know that if they offer to publish a manuscript, the submitting scholar has no viable alternative and the Publisher Defendant can then dictate the terms of publication.

Third, the Publisher Defendants agreed to prohibit scholars from freely sharing the scientific advancements described in submitted manuscripts while those manuscripts are under peer review, a process that often takes over a year (the "Gag Rule"). From the moment scholars submit manuscripts for publication, the Publisher Defendants behave as though the scientific advancements set forth in the manuscripts are their property, to be shared only if the Publisher Defendants grant permission. Moreover, when the Publisher Defendants select manuscripts for publication, the Publisher Defendants will often require scholars to sign away all intellectual property rights, in exchange for nothing. The manuscripts then become the actual property of the Publisher Defendants, and the Publisher Defendants charge the maximum the market will bear for access to that scientific knowledge.


Original Submission

posted by janrinok on Monday September 16, @09:32PM

A systematic review into the potential health effects from radio wave exposure has shown mobile phones are not linked to brain cancer:

Mobile phones are often held against the head during use. And they emit radio waves, a type of non-ionising radiation. These two factors are largely why the idea that mobile phones might cause brain cancer emerged in the first place.

The possibility that mobile phones might cause cancer has been a long-standing concern. Mobile phones – and wireless tech more broadly – are a major part of our daily lives. So it's been vital for science to address the safety of radio wave exposure from these devices.

Over the years, the scientific consensus has remained strong – there's no association between mobile phone radio waves and brain cancer, or health more generally.

Despite the consensus, occasional research studies have been published that suggested the possibility of harm.

In 2011, the International Agency for Research on Cancer (IARC) classified radio wave exposure as a possible carcinogen to humans. The meaning of this classification was largely misunderstood and led to some increase in concern.

IARC is part of the World Health Organization. Its classification of radio waves as a possible carcinogen was largely based on limited evidence from human observational studies. Also known as epidemiological studies, they observe the rate of disease and how it may be caused in human populations.

Observational studies are the best tool researchers have to investigate long-term health effects in humans, but the results can often be biased.

The IARC classification relied on previous observational studies where people with brain cancer reported they used a mobile phone more than they actually did. One example of this is known as the INTERPHONE study.

This new systematic review of human observational studies is based on a much larger data set compared to what the IARC examined in 2011.

[...] It is the most comprehensive review on this topic – it considered more than 5,000 studies, of which 63, published between 1994 and 2022, were included in the final analysis. The main reason studies were excluded was that they were not actually relevant; this is very normal with search results from systematic reviews.

No association between mobile phone use and brain cancer, or any other head or neck cancer, was found.

There was also no association with cancer if a person used a mobile phone for ten or more years (prolonged use). How often they used it – either based on the number of calls or the time spent on the phone – also didn't make a difference.

Importantly, these findings align with previous research. It shows that, although the use of wireless technologies has massively increased in the past few decades, there has been no rise in the incidence of brain cancers.

Journal Reference: Karipidis et al., The effect of exposure to radiofrequency fields on cancer risk in the general and working population: A systematic review of human observational studies – Part I: Most researched outcomes, Environment International, Volume 191, September 2024, 108983. DOI: https://doi.org/10.1016/j.envint.2024.108983


Original Submission

posted by janrinok on Monday September 16, @04:51PM

Arthur T Knackerbracket has processed the following story:

The latest State of the Energy Union report shows that renewable energy has become a major power provider in the EU, but it also warned that efforts need to be stepped up to meet important climate goals.

The EU has revealed significant progress in its renewable energy goals and in reducing its emissions, though it notes some key challenges to its progress.

The bloc’s latest State of the Energy Union report shows that for the first half of 2024, renewable energy such as solar and wind met 50pc of the electricity demand. The report also found that the EU’s gas demand dropped by 138bn cubic metres between August 2022 and May 2024.

Geopolitical issues played a role in changing gas demand – the report says the share of Russian gas in EU imports dropped from 45pc in 2021 to 18pc by June 2024, while imports from other countries including Norway and the US have increased.

The EU also reported some success in reducing greenhouse gas (GHG) emissions. The bloc's emissions fell by 32.5pc between 1990 and 2022, while the EU economy grew by around 67pc in the same period.

However, the report noted that efforts in the renewable energy sector will need to be stepped up to meet the EU’s goal of reducing energy consumption by 11.7pc by 2030 and to reduce its greenhouse gas emissions.

“Economy wide GHG emission projections, recently submitted by Member States, are expected to show some gap with the EU climate ambition,” the EU report said. “To stay on track with the EU 2030 reduction target and climate neutrality by 2050, the EU needs to pick up the pace of change and increase the focus on areas where the required emission reductions are significant.”

Maroš Šefčovič, executive VP for the European Green Deal, said the report shows “unprecedented progress” despite being in turbulent times and facing challenges ahead.

“Emissions are falling, and renewables play a prominent role in our energy system today,” Šefčovič said. “We should swiftly implement the new policy and regulatory framework to address the elevated energy prices, and accelerate development of infrastructure.”

The report calls on Member States to submit their final National Energy and Climate Plans “as soon as possible” to ensure the EU can meet its 2030 energy and climate goals.

Earlier this year, the European Commission recommended that the EU aims for a 90pc net reduction in GHG emissions by 2040 to be able to meet its target of net-zero emissions by 2050. But many feel the plan focuses too much on untested tech and not enough on circularity.

The UN global stocktake – which took place at COP28 last year – revealed that progress has been far too slow, with national commitments falling well short of emissions reductions targets.

Meanwhile, Ireland’s energy-related emissions reached their lowest level in 30 years last year, falling by 7pc according to the Sustainable Energy Authority of Ireland. But this report also warned that Ireland is highly reliant on both fossil fuels and imported energy, and the country is still not on track to remain within its 2021-2025 carbon budget.


Original Submission

posted by janrinok on Monday September 16, @12:07PM
from the too-little-too-late dept.

https://arstechnica.com/gaming/2024/09/unity-is-dropping-its-unpopular-per-install-runtime-fee/

Unity, maker of a popular cross-platform [game] engine and toolkit, will not pursue a broadly unpopular Runtime Fee that would have charged developers based on game installs rather than per-seat licenses. The move comes exactly one year after the fee's initial announcement.

In a blog post attributed to President and CEO Matt Bromberg, the company writes that it cannot continue "democratizing game development" without "a partnership built on trust." Bromberg states that customers understand the necessity of price increases, but not in "a novel and controversial new form." So game developers will not be charged per installation, but they will be sorted into Personal, Pro, and Enterprise tiers by level of revenue or funding.

[...] Unity's announcement of a new "Runtime Fee that's based on game installs" in mid-September 2023 (Wayback archive), while joined by cloud storage and "AI at runtime," would have been costly for smaller developers who found success.

[...] The move led to almost immediate backlash from many developers. Unity, which then-CEO John Riccitiello had described in 2015 as having "no royalties, no [f-ing] around," was "quite simply not a company to be trusted," wrote Necrosoft Games' Brandon Sheffield. Developers said they would hold off on updates or switch engines rather than absorb the fee, which would have retroactively counted installs before January 2024 toward its calculations.

[...] A massive wave of layoffs throughout the winter of 2023 and 2024 showed that Unity's financial position was precarious, partly due to acquisitions during Riccitiello's term. The Runtime Fee would have minimal impact in 2024, the company said in filings, but would "ramp from there as customers adopt our new releases."

Instead of ramping from there, the Runtime Fee is now gone, and Unity has made other changes to its pricing structure:

  • Unity Personal remains free, and its revenue/funding ceiling increases from $100,000 to $200,000
  • Unity Pro, for customers over the Personal limit, sees an 8 percent price increase to $2,200 per seat
  • Unity Enterprise, with customized packages for those over $25 million in revenue or funding, sees a 25 percent increase.

Previously on SoylentNews:
Why Unity Felt the Need to "Rush Out" its Controversial Install-Fee Program - 20231027
Unity CEO John Riccitiello is Retiring, Effective Immediately - 20231011
Kerbal Space Program 2 Has a Big Pre-Launch Issue: Windows Registry Stuffing - 20231003
Unity Dev Group Dissolves After 13 Years Over "Completely Eroded" Company Trust - 20230927
Unity Makes Major Changes to Controversial Install-Fee Program - 20230925
EU Game Devs Ask Regulators to Look at Unity's "Anti-Competitive" Bundling - 20230923
Unity Promises "Changes" to Install Fee Plans as Developer Fallout Continues - 20230918
Developer Dis-Unity - 20230915

Related news elsewhere:
Unity lays off an additional 25 percent of its staffers - 20240109
2024 Unity Gaming Report indicates 62 percent of devs are currently using AI tools - 20240318
Here's Why Unity Software (U) Stock Hit All-Time Lows - 20240910


Original Submission

posted by janrinok on Monday September 16, @07:18AM
from the orbital-advertising-banners dept.

Texas Startup Keeps Launching These Obnoxiously Large Satellites—and the Worst Is Yet to Come

Five BlueBird satellites have launched as part of AST SpaceMobile's growing constellation, with even larger ones ahead that may pose a threat to clear night skies.

Bad news for sky watchers: Earth's orbit has been littered by five more gigantic satellites which are poised to become the brightest objects in the night sky.

The five communication satellites, called BlueBirds, launched on board a SpaceX Falcon 9 rocket on Thursday at 4:52 a.m. ET. Each satellite is equipped with the largest ever commercial communications array to be deployed in low Earth orbit, according to AST SpaceMobile. The company's prototype satellite unfurled its giant array in late 2022, outshining most objects in the skies except for the Moon, Venus, Jupiter, and seven of the brightest stars. Now there are five more of them, as the company builds out its satellite constellation.

AST SpaceMobile is seeking to create the first space-based cellular broadband network directly accessible by cell phones. [...]

[...] AST SpaceMobile wants to build a constellation of more than 100 satellites. On its own, one satellite is bright enough to mess with observations of the cosmos.

[...] The newly launched satellites are just as large as the prototype, but future models could be even larger. "We're just getting started," Avellan said during a livestream, Space.com reported.

[...] AST SpaceMobile isn't the only company trying to build cellular towers in space. SpaceX has launched more than 7,000 satellites to date, and new batches of its Starlink satellites keep making their way to low Earth orbit. Amazon, OneWeb, and Lynk Global are other companies trying to get in on the action.

At least we could access social media or control our IoT devices from the oceans to the remotest desert or mountain.


Original Submission

posted by hubie on Monday September 16, @02:36AM

Artificial intelligence is poised to deliver tremendous benefits to society. But, as many have pointed out, it could also bring unprecedented new horrors. As a general-purpose technology, the same tools that will advance scientific discovery could also be used to develop cyber, chemical, or biological weapons. Governing AI will require widely sharing its benefits while keeping the most powerful AI out of the hands of bad actors. The good news is that there is already a template on how to do just that.

In the 20th century, nations built international institutions to allow the spread of peaceful nuclear energy but slow nuclear weapons proliferation by controlling access to the raw materials—namely weapons-grade uranium and plutonium—that underpin them. The risk has been managed through international institutions, such as the Nuclear Non-Proliferation Treaty and the International Atomic Energy Agency. Today, 32 nations operate nuclear power plants, which collectively provide 10% of the world's electricity, and only nine countries possess nuclear weapons.

Countries can do something similar for AI today. They can regulate AI from the ground up by controlling access to the highly specialized chips that are needed to train the world's most advanced AI models. Business leaders and even the U.N. Secretary-General António Guterres have called for an international governance framework for AI similar to that for nuclear technology.

[Source]: TIME.com

Do you think that such a regulatory framework would work?


Original Submission

posted by janrinok on Sunday September 15, @09:49PM

Some of our favorite food crops around the world aren't reaching their full potential:

Insects that provide the crucial service of pollination are declining en masse, and that has serious consequences for the world's food crops, 75 percent of which depend at least partially – if not entirely – on insect pollination.

While this doesn't include major food crops like rice and wheat, pollination is essential to what the study's first author – ecologist Katherine Turo from Rutgers University in the US – refers to as "nutrient-dense and interesting foods that we like and are culturally relevant".

"If you look through a list of crops and think about which fruits and vegetables you're most excited to eat – like summer berries or apples and pumpkins in the fall – those are the crops that typically need to be pollinated by insects," Turo says.

And yet, there's a lack of experimental research on pollinator limitation in crops. While we know the phenomenon is impacting global food supplies, its prevalence has so far been unclear.

[...] Within this detailed picture, Turo and colleagues found that up to 60 percent of global crop systems are being limited by insufficient pollination. The phenomenon is affecting 25 of the 49 different crop species analyzed, with blueberry, coffee, and apple crops being the worst affected.

Pollinator limitation is occurring in 85 percent of the countries in this database, spanning all six continents represented.

"Our findings are a cause for concern and optimism," says Turo.

"We did detect widespread yield deficits. However, we also estimate that, through continued investment in pollinator management and research, it is likely that we can improve the efficiency of our existing crop fields to meet the nutritional needs of our global population."

[...] "Our findings show that by paying more attention to pollinators, growers could make agricultural fields more productive."

That might be harder than it sounds – insects are being hit with a lethal onslaught of disease, pesticides, shifting seasons, and habitat loss.

Perhaps quantifying these tiny but mighty allies' services to our billion-dollar industries will help us to take the threats they face more seriously.

Journal Reference: Turo, K.J., Reilly, J.R., Fijen, T.P.M. et al. Insufficient pollinator visitation often limits yield in crop systems worldwide. Nat Ecol Evol 8, 1612–1622 (2024). https://doi.org/10.1038/s41559-024-02460-2


Original Submission

posted by hubie on Sunday September 15, @05:06PM
from the They-got-paid-for-that? dept.

The Ig Nobel Prizes for 2024 have been announced. If you don't know what they are:

Curiosity is the driving force behind all science, which may explain why so many scientists sometimes find themselves going in some decidedly eccentric research directions. Did you hear about the WWII plan to train pigeons as missile guidance systems? How about experiments on the swimming ability of a dead rainbow trout or that time biologists tried to startle cows by popping paper bags by their heads? These and other unusual research endeavors were honored tonight in a virtual ceremony to announce the 2024 recipients of the annual Ig Nobel Prizes. Yes, it's that time of year again, when the serious and the silly converge—for science.

Hope you weren't expecting to get any work done for the next hour or two.


Original Submission

posted by hubie on Sunday September 15, @12:18PM

Arthur T Knackerbracket has processed the following story:

The US government has noticed the potentially negative effects of generative AI on areas like journalism and content creation. Senator Amy Klobuchar, along with seven Democrat colleagues, urged the Federal Trade Commission (FTC) and Justice Department to probe generative AI products like ChatGPT for potential antitrust violations, they wrote in a press release.

"Recently, multiple dominant online platforms have introduced new generative AI features that answer user queries by summarizing, or, in some cases, merely regurgitating online content from other sources or platforms," the letter states. "The introduction of these new generative AI features further threatens the ability of journalists and other content creators to earn compensation for their vital work."

The lawmakers went on to note that traditional search results lead users to publishers' websites while AI-generated summaries keep the users on the search platform "where that platform alone can profit from the user's attention through advertising and data collection."

These products also have significant competitive consequences that distort markets for content. When a generative AI feature answers a query directly, it often forces the content creator—whose content has been relegated to a lower position on the user interface—to compete with content generated from their own work.

The fact that AI may be scraping news sites and then not even directing users to the original source could be a form of "exclusionary conduct or an unfair method of competition in violation of antitrust laws," the lawmakers concluded. (That's on top of being a potential violation of copyright laws, but that's another legal battle altogether.)


Original Submission

posted by hubie on Sunday September 15, @07:35AM

https://martypc.blogspot.com/2024/08/pc-floppy-copy-protection-softguard.html

Softguard Systems was founded by Joseph Diodati, Paul Sachse and Ken Williams in 1983¹. The company went public in 1984, and by 1985 was one of the industry leaders in copy protection technology, although they produced a few other unrelated products as well.

Advertisements for their copy-protection product, SUPERLoK, were commonly seen in the classified sections of publications such as InfoWorld and PC Magazine.

The original Superlok product required professional disk duplication to lay down the requisite copy protection track. Eventually, Softguard would produce the "SUPERLoK KIT," which was writable with a standard PC floppy controller. An advertisement for the Kit can be seen above, left. The Kit version was aimed at smaller developers on a budget, and did not offer the same level of protection. This article will focus on the original Superlok product.


Original Submission

posted by hubie on Sunday September 15, @02:53AM
from the hopefully-useful-AND-correct dept.

In groups, people screen out chatter around them - and now technology can do the same:

It's the perennial "cocktail party problem" - standing in a room full of people, drink in hand, trying to hear what your fellow guest is saying.

In fact, human beings are remarkably adept at holding a conversation with one person while filtering out competing voices.

However, perhaps surprisingly, it's a skill that technology has until recently been unable to replicate.

And that matters when it comes to using audio evidence in court cases. Voices in the background can make it hard to be certain who's speaking and what's being said, potentially making recordings useless.

Electrical engineer Keith McElveen, founder and chief technology officer of Wave Sciences, became interested in the problem when he was working for the US government on a war crimes case.

"What we were trying to figure out was who ordered the massacre of civilians. Some of the evidence included recordings with a bunch of voices all talking at once - and that's when I learned what the "cocktail party problem" was," he says.

"I had been successful in removing noise like automobile sounds or air conditioners or fans from speech, but when I started trying to remove speech from speech, it turned out not only to be a very difficult problem, it was one of the classic hard problems in acoustics.

"Sounds are bouncing round a room, and it is mathematically horrible to solve."

The answer, he says, was to use AI to try to pinpoint and screen out all competing sounds based on where they originally came from in a room.

This doesn't just mean other people who may be speaking - there's also a significant amount of interference from the way sounds are reflected around a room, with the target speaker's voice being heard both directly and indirectly.

In a perfect anechoic chamber - one totally free from echoes - one microphone per speaker would be enough to pick up what everyone was saying; but in a real room, the problem requires a microphone for every reflected sound too.

[...] And, he adds: "We knew there had to be a solution, because you can do it with just two ears."

[...] What they had come up with was an AI that can analyse how sound bounces around a room before reaching the microphone or ear.

"We catch the sound as it arrives at each microphone, backtrack to figure out where it came from, and then, in essence, we suppress any sound that couldn't have come from where the person is sitting," says Mr McElveen.

The effect is comparable in certain respects to when a camera focusses on one subject and blurs out the foreground and background.
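
Wave Sciences has not published how its algorithm works, but the general principle of favouring sound from one location while suppressing the rest can be illustrated with a classic delay-and-sum beamformer. The sketch below is a toy NumPy example with invented microphone positions, sample rate, and source angle; it is a simplification, not the company's method.

```python
# Toy delay-and-sum beamformer (illustration only, not Wave Sciences' algorithm).
# Signals arriving from the steered direction add coherently across microphones;
# sounds from other directions (and diffuse noise) add incoherently and fade.
import numpy as np

FS = 16_000                                   # sample rate, Hz (assumed)
C = 343.0                                     # speed of sound, m/s
MIC_X = np.array([0.00, 0.05, 0.10, 0.15])    # linear mic array positions, m

def delay_and_sum(signals: np.ndarray, angle_deg: float) -> np.ndarray:
    """signals: (n_mics, n_samples). Steer toward angle_deg (0 = broadside)."""
    delays = MIC_X * np.sin(np.deg2rad(angle_deg)) / C      # seconds per mic
    shifts = np.round(delays * FS).astype(int)               # whole samples
    aligned = np.stack([np.roll(sig, -s) for sig, s in zip(signals, shifts)])
    return aligned.mean(axis=0)                               # coherent average

# Demo: a 1 kHz "talker" arriving from 30 degrees, buried in per-mic noise
t = np.arange(FS) / FS
talker = np.sin(2 * np.pi * 1000 * t)
mics = np.stack([
    np.roll(talker, int(round(x * np.sin(np.deg2rad(30)) / C * FS)))
    + 0.5 * np.random.randn(FS)
    for x in MIC_X
])
enhanced = delay_and_sum(mics, angle_deg=30.0)   # noise drops roughly with sqrt(4)
```

A real system would also have to model and subtract the room reflections arriving from every other direction, which is the part McElveen describes as "mathematically horrible."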

"The results don't sound crystal clear when you can only use a very noisy recording to learn from, but they're still stunning."

The technology had its first real-world forensic use in a US murder case, where the evidence it was able to provide proved central to the convictions.

[...] Since then, other government laboratories, including in the UK, have put it through a battery of tests. The company is now marketing the technology to the US military, which has used it to analyse sonar signals.

[...] Eventually it aims to introduce tailored versions of its product for use in audio recording kit, voice interfaces for cars, smart speakers, augmented and virtual reality, sonar and hearing aid devices.

So, for example, if you speak to your car or smart speaker it wouldn't matter if there was a lot of noise going on around you, the device would still be able to make out what you were saying.

[...] "The math in all our tests shows remarkable similarities with human hearing. There's little oddities about what our algorithm can do, and how accurately it can do it, that are astonishingly similar to some of the oddities that exist in human hearing," says McElveen.

"We suspect that the human brain may be using the same math - that in solving the cocktail party problem, we may have stumbled upon what's really happening in the brain."


Original Submission