Stories
Slash Boxes
Comments

SoylentNews is people

SoylentNews is powered by your submissions, so send in your scoop. Only 13 submissions in the queue.

Log In

Log In

Create Account  |  Retrieve Password


Site News

Join our Folding@Home team:
Main F@H site
Our team page


Funding Goal
For 6-month period:
2022-07-01 to 2022-12-31
(All amounts are estimated)
Base Goal:
$3500.00

Currently:
$438.92

12.5%

Covers transactions:
2022-07-02 10:17:28 ..
2022-10-05 12:33:58 UTC
(SPIDs: [1838..1866])
Last Update:
2022-10-05 14:04:11 UTC --fnord666

Support us: Subscribe Here
and buy SoylentNews Swag


We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.

The shambling corpse of Steve Jobs lumbers forth, heeding not the end of October! How will you drive him away?

  • Flash running on an Android phone, in denial of his will
  • Zune, or another horror from darkest Redmond
  • Newton, HyperCard, or some other despised interim Apple product
  • BeOS, the abomination from across the sea
  • Macintosh II with expansion slots, in violation of his ancient decree
  • Tow his car for parking in a handicap space without a permit
  • Oncology textbook—without rounded corners
  • Some of us are still in mourning, you insensitive clod!

[ Results | Polls ]
Comments:23 | Votes:55

posted by janrinok on Friday September 27, @09:12PM   Printer-friendly
from the amazing-but-why-did-you-do-it dept.

Linux boots in 4.76 days on the Intel 4004

Historic 4-bit microprocessor from 1971 can execute Linux commands over days or weeks.

Hardware hacker Dmitry Grinberg recently achieved what might sound impossible: booting Linux on the Intel 4004, the world's first commercial microprocessor. With just 2,300 transistors and an original clock speed of 740 kHz, the 1971 CPU is incredibly primitive by modern standards. And it's slow—it takes about 4.76 days for the Linux kernel to boot.

Initially designed for a Japanese calculator called the Busicom 141-PF, the 4-bit 4004 found limited use in commercial products of the 1970s [...]

[....] If you're skeptical that this feat is possible with a raw 4004, you're right: The 4004 itself is far too limited to run Linux directly. Instead, Grinberg created a solution that is equally impressive: an emulator that runs on the 4004 and emulates a MIPS R3000 processor—the architecture used in the DECstation 2100 workstation that Linux was originally ported to.

If it can run a C compiler, it can probably run DOOM.

See Also:


Original Submission

posted by janrinok on Friday September 27, @04:29PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

by University of Texas at Dallas

In a study published July 29 in Advanced Materials, University of Texas at Dallas researchers found that X-rays of the kidneys using gold nanoparticles as a contrast agent might be more accurate in detecting kidney disease than standard laboratory blood tests. Based on their study in mice, they also found that caution may be warranted in employing renal-clearable nanomedicines to patients with compromised kidneys.

Before administering renal-clearable drugs, doctors routinely check a patient's kidney function by testing their blood urea nitrogen (BUN) and creatinine (Cr) levels. With the increasing use of engineered nanoparticles to deliver payloads of drugs or imaging agents to the body, an important question is how the nanoparticles' movement and elimination through the kidney is affected by kidney damage. Can traditional biomarkers like BUN and Cr accurately predict how well—or how poorly—such nanoparticles will move through the kidneys?

The UT Dallas researchers found that in mice with severely injured kidneys caused by the drug cisplatin, in which BUN and Cr levels were 10 times normal, nanoparticle transport through the kidneys was slowed down significantly, a situation that caused the nanoparticles to stay in the kidneys longer.

In mildly injured kidneys, however, in which BUN and Cr levels were only four to five times higher than normal, the transport and retention of gold nanoparticles couldn't be predicted by those tests.

On the other hand, the amount of gold nanoparticle accumulation seen on X-rays did correlate strongly with the degree of kidney damage.

"While our findings emphasize the need for caution when using these advanced treatments in patients with compromised kidneys, they also highlight the potential of gold nanoparticles as a noninvasive way to assess kidney injuries using X-ray imaging or other techniques that correlate with gold accumulation in the kidneys," said Dr. Mengxiao Yu, a corresponding author of the study and a research associate professor of chemistry and biochemistry in the School of Natural Sciences and Mathematics.

Chemistry and biochemistry research scientist Xuhui Ning BS'14, Ph.D.'19 is lead author of the study, and Dr. Jie Zheng, professor of chemistry and biochemistry and a Distinguished Chair in Natural Sciences and Mathematics, is a corresponding author. Other contributors are affiliated with UT Southwestern Medical Center and Vanderbilt University Medical Center.

More information: Xuhui Ning et al, Gold Nanoparticle Transport in the Injured Kidneys with Elevated Renal Function Biomarkers, Advanced Materials (2024). DOI: 10.1002/adma.202402479

Journal information: Advanced Materials


Original Submission

posted by janrinok on Friday September 27, @11:42AM   Printer-friendly
from the weakest-link dept.

https://arstechnica.com/security/2024/09/false-memories-planted-in-chatgpt-give-hacker-persistent-exfiltration-channel/

When security researcher Johann Rehberger recently reported a vulnerability in ChatGPT that allowed attackers to store false information and malicious instructions in a user's long-term memory settings, OpenAI summarily closed the inquiry, labeling the flaw a safety issue, not, technically speaking, a security concern.

So Rehberger did what all good researchers do: He created a proof-of-concept exploit that used the vulnerability to exfiltrate all user input in perpetuity. OpenAI engineers took notice and issued a partial fix earlier this month.

The vulnerability abused long-term conversation memory, a feature OpenAI began testing in February and made more broadly available in September.

[...] Within three months of the rollout, Rehberger found that memories could be created and permanently stored through indirect prompt injection, an AI exploit that causes an LLM to follow instructions from untrusted content such as emails, blog posts, or documents. The researcher demonstrated how he could trick ChatGPT into believing a targeted user was 102 years old, lived in the Matrix, and insisted Earth was flat and the LLM would incorporate that information to steer all future conversations.

[...] The attack isn't possible through the ChatGPT web interface, thanks to an API OpenAI rolled out last year.

[...] OpenAI provides guidance here for managing the memory tool and specific memories stored in it. Company representatives didn't respond to an email asking about its efforts to prevent other hacks that plant false memories.


Original Submission

posted by hubie on Friday September 27, @06:56AM   Printer-friendly
from the I-want-my-own-PNT-constellation dept.

Arthur T Knackerbracket has processed the following story:

Two of the world's satellite positioning service constellations reached important milestones this week, after the European Space Agency and China's Satellite Navigation Office each launched its own pair of satellites.

Europe's sats were carried by a SpaceX Falcon 9 rocket that left Florida's Kennedy Space Center on September 18. A day later, China's birds rode a Long March 3B that launched from the Xichang Satellite Launch Center in Sichuan province.

China's sats were the 63rd and 64th members of its Beidou constellation, which currently has 50 operating satellites.

This pair were the last of China's third-generation navigation-sat design. Local media reported that the two satellites are spares in case others falter, and that they include some tech that is expected to be included in fourth-gen sats.

[...] Europe's launch delivered the 31st and 32nd members of its Galileo constellation into space.

"With the deployment of these two satellites, Galileo completes its constellation as designed, reaching the required operational satellites plus one spare per orbital plane," proclaimed ESA director of navigation Javier Benedicto.


Original Submission

posted by hubie on Friday September 27, @02:11AM   Printer-friendly

https://arstechnica.com/security/2024/09/nist-proposes-barring-some-of-the-most-nonsensical-password-rules/

The National Institute of Standards and Technology (NIST), the federal body that sets technology standards for governmental agencies, standards organizations, and private companies, has proposed barring some of the most vexing and nonsensical password requirements. Chief among them: mandatory resets, required or restricted use of certain characters, and the use of security questions.

Choosing strong passwords and storing them safely is one of the most challenging parts of a good cybersecurity regimen. More challenging still is complying with password rules imposed by employers, federal agencies, and providers of online services. Frequently, the rules—ostensibly to enhance security hygiene—actually undermine it. And yet, the nameless rulemakers impose the requirements anyway.

[...] A section devoted to passwords injects a large helping of badly needed common sense practices that challenge common policies. An example: The new rules bar the requirement that end users periodically change their passwords. This requirement came into being decades ago when password security was poorly understood, and it was common for people to choose common names, dictionary words, and other secrets that were easily guessed.


Original Submission

posted by janrinok on Thursday September 26, @09:28PM   Printer-friendly
from the Keeping-Up dept.

(Source: A Register article mentioned the new XEON having "MRDIMM" capability. "What is MRDIMM?")

Keeping up with the latest in hardware is hard, and in the early turn of the century there was a new technology in every magazine on the rack.

Today, we've hit some fatigue and just don't keep up as much. Right? :-) Anyway, while most of us have heard of Dell's (and Lenovo's) proposal for CAMM modules to replace the multi-stick SO-DIMM sockets, servers are getting a new standard, too: M(C)RDIMMs -- Multiplexed (Combined) Rank Dual Inline Memory Modules.

Some outtakes from product briefs, such as Micron's,

  • DDR5 physical and electrical standards
  • up to 256GB modules
  • increased <everything that makes it good>

By implementing DDR5 physical and electrical standards, MRDIMM technology delivers a memory advancement that allows scaling of both bandwidth and capacity per core to future-proof compute systems and meets the expanding demands of data center workloads. MRDIMMs provide the following advantages over RDIMMs: 2

  • Up to 39% increase in effective memory bandwidth2
  • Greater than 15% better bus efficiency2
  • Up to 40% latency improvements compared to RDIMMs3

MRDIMMs support a wide capacity range from 32GB to 256GB in standard and tall form factors (TFF), which are suitable for high-performance 1U and 2U servers. The improved thermal design of TFF modules reduces DRAM temperatures by up to 20 degrees Celsius at the same power and airflow, [...] enabling more efficient cooling capabilities in data centers and optimizing total system task energy for memory-intensive workloads. Micron's industry-leading memory design and process technology using 32Gb DRAM die enables 256GB TFF MRDIMMs to have the same power envelope as 128GB TFF MRDIMMs using 16Gb die. A 256GB TFF MRDIMM provides a 35% improvement in performance over similar-capacity TSV RDIMMs at the maximum data rate.

And SK Hynix has their own variety, touting bandwidth of 8.8GB/s (ove DDR5's 6.4GB/s).

New to 2024, shipping H2, it seems. Keep up with the times! Grow your RAM modules. Taller (literally).


Original Submission

posted by hubie on Thursday September 26, @04:42PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Back in June, the FCC proposed a significant rule change that would require carriers to unlock all phones within 60 days of activation. At the time, the FCC was seeking public comment on the proposal, with plans to vote on whether to pursue the issue in early July. Since then, the proposal has been unanimously approved by the five-member commission, and the plan marches forward. To be clear, this doesn’t mean a new unlock policy is happening anytime soon; it just means that the FCC will continue to actively pursue these regulatory changes. Unsurprisingly, AT&T and T-Mobile have both spoken up against the change.

AT&T has indicated that the rule changes could negatively affect its ability to offer affordable devices, though that’s about the extent of its opposition so far. T-Mobile has been considerably more vocal. The “Uncarrier” has not only made it clear that this change could negatively impact their device payment plans and other services, but it has also gone so far as to imply that the change might cause the carrier to give up on payment plans altogether (as first reported by Broadband Breakfast). Furthermore, the carrier questions whether the FCC even has the authorization to pursue such a change.

[...] You might notice that I’ve yet to mention Verizon, and that’s for good reason. Big Red is the only major carrier vocally in support of the change. As you likely guessed, the reason isn’t out of the kindness of their hearts.

Back in 2008, the FCC reached an agreement with Verizon regarding the use of the 700MHz spectrum, with the carrier agreeing to prompt device unlocks. In 2019, the FCC agreed to implement a 60-day unlocking window to help Verizon combat potential fraud around its payment plans and special deal pricing. In other words, Verizon is already abiding by this change, so it loses nothing by supporting it—in fact, it might even have something to gain.

Right now, many carriers, both prepaid and postpaid, offer free trials through eSIM. While AT&T and T-Mobile limit these kinds of trials due to their current unlocking policies, it’s much easier to try out a different network while still keeping your Verizon phone and subscription. This means a Verizon customer has a greater chance to shop for other networks than those on another carrier, increasing their chances of being lured away by a competitor. If all carriers adhere to the same 60-day window, the playing field becomes level.


Original Submission

posted by hubie on Thursday September 26, @11:57AM   Printer-friendly
from the crushing-beetles dept.

VW is considering axing as many as 30,000 jobs as it scrambles to save billions of euros amid a slowdown in the car market, German media has reported:

The carmaker recently announced it could close some of its German factories for the first time in history as it struggles to reinvent itself for the electric era.

Analysts at Jefferies said VW is considering closing two to three facilities, with as many as five German sites under threat, putting 15,000 jobs at risk.

[...] A VW spokesman said: "We do not confirm the figure. One thing is clear: Volkswagen has to reduce its costs at its German sites.

R&D will likely be hit hard:

While Volkswagen is staying tight-lipped on specifics, Manager Magazin suggested research and development could take a massive hit. If their numbers pan out, roughly 4,000 to 6,000 R&D employees could be cut from the current number of around 13,000.

Previously: VW Turns on Germany as China Targets Europe's EV Blunders


Original Submission

posted by hubie on Thursday September 26, @07:14AM   Printer-friendly

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-electronic.html

Before Electronic Arts (EA) was the publishing juggernaut that it is today, it was just one of dozens of software publishers putting out titles for various home computers, including the IBM PC. EA was founded in 1982 by Trip Hawkins, who would go on to create the ultimately unsuccessful 3DO game console. In the mid-1980's, EA was perhaps most famous for their paint program, Deluxe Paint, which became a popular graphics tool for the whole computer gaming industry.

Unlike the companies we have covered to date, EA is mostly widely known for their games, not their copy protection schemes. EA is famous enough that a long segue into their corporate history isn't really necessary - you can just read the Wikipedia entry.

EA wasn't selling their copy protection technology, so there are no flashy advertisements extolling its virtues or many articles discussing it at all. All that is left to talk about is the protection itself.


Original Submission

posted by hubie on Thursday September 26, @02:31AM   Printer-friendly

WHOIS data is unreliable. So why is it used in TLS certificate applications?:

Certificate authorities and browser makers are planning to end the use of WHOIS data verifying domain ownership following a report that demonstrated how threat actors could abuse the process to obtain fraudulently issued TLS certificates.

TLS certificates are the cryptographic credentials that underpin HTTPS connections, a critical component of online communications verifying that a server belongs to a trusted entity and encrypts all traffic passing between it and an end user. These credentials are issued by any one of hundreds of CAs (certificate authorities) to domain owners. The rules for how certificates are issued and the process for verifying the rightful owner of a domain are left to the CA/Browser Forum. One "base requirement rule" allows CAs to send an email to an address listed in the WHOIS record for the domain being applied for. When the receiver clicks an enclosed link, the certificate is automatically approved.

Researchers from security firm watchTowr recently demonstrated how threat actors could abuse the rule to obtain fraudulently issued certificates for domains they didn't own. The security failure resulted from a lack of uniform rules for determining the validity of sites claiming to provide official WHOIS records.

[...] The research didn't escape the notice of the CA/Browser Forum (CAB Forum). On Monday, a member representing Google proposed ending the reliance on WHOIS data for domain ownership verification "in light of recent events where research from watchTowr Labs demonstrated how threat actors could exploit WHOIS to obtain fraudulently issued TLS certificates."

The formal proposal calls for reliance on WHOIS data to "sunset" in early November. It establishes specifically that "CAs MUST NOT rely on WHOIS to identify Domain Contacts" and that "Effective November 1, 2024, validations using this [email verification] method MUST NOT rely on WHOIS to identify Domain Contact information."

Since Monday's submission, more than 50 follow-up comments have been posted. Many of the responses expressed support for the proposed change. Others have questioned the need for a change as proposed, given that the security failure watchTowr uncovered is known to affect only a single top-level domain.

[...] The proposed changes are formally in the discussion phase of deliberations. It's unclear when formal voting on the change will begin.

Previously: Rogue WHOIS Server Gives Researcher Superpowers No One Should Ever Have


Original Submission

posted by hubie on Wednesday September 25, @09:48PM   Printer-friendly
from the lies-damn-lies-statistics-and-pundits dept.

We are just a few weeks away from the general election in the United States and many publications provide daily updates to election forecasts. One of the most well-known forecasting systems was developed by Nate Silver, originally for the website FiveThirtyEight. Although Silver's model is quite sophisticated and incorporates a considerable amount of data beyond polls, other sites like RealClearPolitics just use a simple average of recent polls. Does all of the complexity of models like Silver's actually improve forecasts, and can we demonstrate that they're superior to a simple average of polls?

Pre-election polls are a bit like a science project that uses a lot of sensors to measure the state of a single system. There's a delay between the time a sensor is polled for data and when it returns a result, so the project uses many sensors to get more frequent updates. However, the electronics shop had a limited quantity of the highest quality sensor, so a lot of other sensors were used that have a larger bias, less accuracy, or use different methods to measure the same quantity. The science project incorporates the noisy data from the heterogeneous sensors to try to produce the most accurate estimate of the state of the system.

Polls are similar to my noisy sensor analogy in that each poll has its own unique methodology, has a different margin of error related to sample size, and may have what Silver calls "house effects" that may result in a tendency for results from polling firms to favor some candidates or political parties. Some of the more complex election forecasting systems like Silver's model attempt to correct for the bias and give more weight to polls with methodologies that are considered to have better polling practices and that use larger sample sizes.

The purpose of the election forecasts is not to take a snapshot of the race at a particular point in time, but instead to forecast the results on election day. For example, after a political party officially selects its presidential candidate at the party's convention, the candidate tends to receive a temporary boost in the polls, which is known as a "post-convention bounce". Although this effect is well-documented through many election cycles, it is temporary, and polls taken during this period tend to overestimate the actual support the candidate will receive on election day. Many forecast models try to adjust for this bias when incorporating polls taken shortly after the convention.

Election models also often incorporate "fundamentals" such as approval ratings and the tendency of a strong economy to favor incumbents. This information can be used separately to predict the outcome of elections or incorporated into a model along with polling data. Some forecast models like Silver's model also incorporate polls from other states that are similar to the state that is being forecasted and data from national polls to try to produce a more accurate forecast and smooth out the noise from individual polls. These models may also incorporate past voting trends, expert ratings of races, and data from prediction markets. The end result is a model that is very complex and incorporates a large amount of data. But does it actually provide more accurate forecasts?

Unusual behaviors have been noted with some models, such as the tails in Silver's model that tended to include some very unusual outcomes. On the other hand, many models predicted that it was nearly certain that Hillary Clinton would defeat Donald Trump in the 2016 election, perhaps underestimating the potential magnitude of polling errors, leading to tails that weren't heavy enough. Election forecasters have to decide what factors to include in their models and how heavily to weight them, sometimes drawing criticism when their models appear to be an outlier. Presidential elections occur only once every four years in the United States, so there are more fundamental questions about whether there's even enough data to verify the accuracy of forecast models. There may even be some evidence of a feedback, where election forecast models could actually influence election results.

Whether the goal is to forecast a presidential election or project a player's statistics in an upcoming baseball season, sometimes even the most complex of forecasting systems struggle to outperform simple prediction models. I'm not interested in discussions about politics and instead pose a fundamental data science question: does all of the complexity of election models like Nate Silver's really make a meaningful difference, or is a simple average of recent polls just as good of a forecast?


Original Submission

posted by hubie on Wednesday September 25, @05:03PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Our planet is choking on plastics. Some of the worst offenders, which can take decades to degrade in landfills, are polypropylene—which is used for things such as food packaging and bumpers—and polyethylene, found in plastic bags, bottles, toys, and even mulch.

Polypropylene and polyethylene can be recycled, but the process can be difficult and often produces large quantities of the greenhouse gas methane. They are both polyolefins, which are the products of polymerizing ethylene and propylene, raw materials that are mainly derived from fossil fuels. The bonds of polyolefins are also notoriously hard to break.

Now, researchers at the University of California, Berkeley have come up with a method of recycling these polymers that uses catalysts that easily break their bonds, converting them into propylene and isobutylene, which are gasses at room temperature. Those gasses can then be recycled into new plastics.

“Because polypropylene and polyethylene are among the most difficult and expensive plastics to separate from each other in a mixed waste stream, it is crucial that [a recycling] process apply to both polyolefins,” the research team said in a study recently published in Science.

The recycling process the team used is known as isomerizing ethenolysis, which relies on a catalyst to break down olefin polymer chains into their small molecules. Polyethylene and polypropylene bonds are highly resistant to chemical reactions because both of these polyolefins have long chains of single carbon-carbon bonds. Most polymers have at least one carbon-carbon double bond, which is much easier to break.

[...] The reaction breaks all the carbon-carbon bonds in polyethylene and polypropylene, with the carbon atoms released during the breaking of these bonds ending up attached to molecules of ethylene.“The ethylene is critical to this reaction, as it is a co-reactant,” researcher R.J. Conk, one of the authors of the study, told Ars Technica. “The broken links then react with ethylene, which removes the links from the chain. Without ethylene, the reaction cannot occur.”

The entire chain is catalyzed until polyethylene is fully converted to propylene, and polypropylene is converted to a mixture of propylene and isobutylene.

This method has high selectivity—meaning it produces a large amount of the desired product. That means propylene derived from polyethylene, and both propylene and isobutylene derived from polypropylene. Both of these chemicals are in high demand, since propylene is an important raw material for the chemical industry, while isobutylene is a frequently used monomer in many different polymers, including synthetic rubber and a gasoline additive.

Because plastics are often mixed at recycling centers, the researchers wanted to see what would happen if polypropylene and polyethylene underwent isomerizing ethenolysis together. The reaction was successful, converting the mixture into propylene and isobutylene, with slightly more propylene than isobutylene.

[...] While this recycling method sounds like it could prevent tons upon tons of waste, it will need to be scaled up enormously for this to happen. When the research team increased the scale of the experiment, it produced the same yield, which looks promising for the future. Still, we’ll need to build considerable infrastructure before this could make a dent in our plastic waste.

“We hope that the work described…will lead to practical methods for…[producing] new polymers,” the researchers said in the same study. “By doing so, the demand for production of these essential commodity chemicals starting from fossil carbon sources and the associated greenhouse gas emissions could be greatly reduced.”

Science, 2024. DOI: 10.1126/science.adq731


Original Submission

posted by hubie on Wednesday September 25, @12:15PM   Printer-friendly
from the lawyer-up dept.

https://arstechnica.com/gaming/2024/09/nintendo-the-pokemon-company-sue-palworld-maker-pocketpair/

Nintendo and The Pokemon Company announced they have filed a patent-infringement lawsuit against Pocketpair, the makers of the heavily Pokémon-inspired Palworld. The Tokyo District Court lawsuit seeks an injunction and damages "on the grounds that Palworld infringes multiple patent rights," according to the announcement.
[...]
The many surface similarities between Pokémon and Palworld are readily apparent, even though Pocketpair's game adds many new features over Nintendo's (such as, uh, guns). But making legal hay over even heavy common ground between games can be an uphill battle. That's because copyright law (at least in the US) generally doesn't apply to a game's mere design elements, and only extends to "expressive elements" such as art, character design, and music.

Generally, even blatant rip-offs of successful games are able to make just enough changes to those "expressive" portions to avoid any legal trouble. But Palworld might clear the high legal bar for infringement if the game's 3D character models were indeed lifted almost wholesale from actual Pokémon game files, as some observers have been alleging since January.
[...]
"Palworld is such a different type of game from Pokémon, it's hard to imagine what patents (*not* copyrights) might have been even plausibly infringed," game industry attorney Richard Hoeg posted on social media Wednesday night. "Initial gut reaction is Nintendo may be reaching."

PocketPair CEO Takuro Mizobe told Automaton Media in January that the game had "cleared legal reviews" and that "we have absolutely no intention of infringing upon the intellectual property of other companies."
[...]
Update (Sept. 19, 2024): In a statement posted overnight, Pocketpair said it was currently "unaware of the specific patents we are accused of infringing upon, and have not been notified of such details.
[...]
Pocketpair promises that it "will continue improving Palworld and strive to create a game that our fans can be proud of."


Original Submission

posted by hubie on Wednesday September 25, @05:21AM   Printer-friendly
from the bad-IoT dept.

"The government's malware disabling commands, which interacted with the malware's native functionality, were extensively tested prior to the operation," according to the DOJ:

U.S. authorities have dismantled a massive botnet run by hackers backed by the Chinese government, according to a speech given by FBI director Christopher Wray on Wednesday. The botnet malware infected a number of different types of internet-connected devices around the world, including home routers, cameras, digital video recorders, and NAS drives. Those devices were used to help infiltrate sensitive networks related to universities, government agencies, telecommunications providers, and media organizations.

Wray explained the operation at the Aspen Digital conference and said the hackers work for a Beijing-based company called Integrity Technology Group, which is known to U.S. researchers as Flax Typhoon. The botnet was launched in mid-2021, according to the FBI, and infected roughly 260,000 devices as of June 2024.

The operation to dismantle the botnet was coordinated by the FBI, the NSA, and the Cyber National Mission Force (CNMF), according to a press release dated Wednesday. The U.S. Department of Justice received a court order to take control of the botnet infrastructure by sending disabling commands to the malware on infected devices. The hackers tried to counterattack by hitting FBI infrastructure but were "ultimately unsuccessful," according to the law enforcement agency.

About half of the devices hijacked were in the U.S., according to Wray, but there were also devices identified as compromised in South America, Europe, Africa, Southeast Asia, and Australia. And the DOJ noted in a press release that authorities in Australia, Canada, New Zealand, and the UK all helped take down the botnet.

Originally spotted on Schneier on Security.

Related: Chinese Malware Removed From SOHO Routers After FBI Issues Covert Commands


Original Submission

posted by janrinok on Wednesday September 25, @12:34AM   Printer-friendly

Starlink imposes $100 "congestion charge" on new users in parts of US:

New Starlink customers have to pay a $100 "congestion charge" in areas where the satellite broadband network has limited capacity.

"In areas with network congestion, there is an additional one-time charge to purchase Starlink Residential services," a Starlink FAQ says. "This fee will only apply if you are purchasing or activating a new service plan. If you change your Service address or Service Plan at a later date, you may be charged the congestion fee."

The charge is unwelcome for anyone wanting Starlink service in a congested area, but it could help prevent the capacity crunch from getting worse by making people think twice about signing up. The SpaceX-owned Internet service provider also seems to anticipate that people who sign up for service in congested areas may change their minds after trying it out for a few weeks.

"Our intention is to no longer charge this fee to new customers as soon as network capacity improves. If you're not satisfied with Starlink and return it within the 30-day return window, the charge will be refunded," the company said.

There is some corresponding good news for people in areas with more Starlink capacity. Starlink "regional savings," introduced a few months ago, provides a $100 service credit in parts of the US "where Starlink has abundant network availability." The credit is $200 in parts of Canada with abundant network availability.

The congestion charge was reported by PCMag on September 13, after being noticed by users of the Starlink subreddit. "The added fee appears to pop up in numerous states, particularly in the south and eastern US, such as Texas, Florida, Kansas, Ohio and Virginia, among others, which have slower Starlink speeds due to the limited network capacity," PCMag noted.

Speed test data showed in 2022 that Starlink speeds dropped significantly as more people signed up for the service, a fact cited by the Federal Communications Commission when it rejected $886 million worth of broadband deployment grants for the company.

This isn't the first time Starlink has varied pricing based on regional congestion. In February 2023, Starlink decided that people in limited-capacity areas would pay $120 a month, and people in excess-capacity areas would pay $90 a month.


Original Submission