

posted by janrinok on Thursday September 26, @09:28PM
from the Keeping-Up dept.

(Source: A Register article mentioned the new Xeon having "MRDIMM" capability. "What is MRDIMM?")

Keeping up with the latest in hardware is hard, and around the turn of the century there was a new technology in every magazine on the rack.

Today, we've hit some fatigue and just don't keep up as much. Right? :-) Anyway, while most of us have heard of Dell's (and Lenovo's) proposal for CAMM modules to replace the multi-stick SO-DIMM sockets, servers are getting a new standard, too: M(C)RDIMMs -- Multiplexed (Combined) Rank Dual Inline Memory Modules.

Some highlights from product briefs, such as Micron's:

  • DDR5 physical and electrical standards
  • up to 256GB modules
  • increased <everything that makes it good>

By implementing DDR5 physical and electrical standards, MRDIMM technology delivers a memory advancement that allows scaling of both bandwidth and capacity per core to future-proof compute systems and meets the expanding demands of data center workloads. MRDIMMs provide the following advantages over RDIMMs:

  • Up to 39% increase in effective memory bandwidth
  • Greater than 15% better bus efficiency
  • Up to 40% latency improvements compared to RDIMMs

MRDIMMs support a wide capacity range from 32GB to 256GB in standard and tall form factors (TFF), which are suitable for high-performance 1U and 2U servers. The improved thermal design of TFF modules reduces DRAM temperatures by up to 20 degrees Celsius at the same power and airflow, [...] enabling more efficient cooling capabilities in data centers and optimizing total system task energy for memory-intensive workloads. Micron's industry-leading memory design and process technology using 32Gb DRAM die enables 256GB TFF MRDIMMs to have the same power envelope as 128GB TFF MRDIMMs using 16Gb die. A 256GB TFF MRDIMM provides a 35% improvement in performance over similar-capacity TSV RDIMMs at the maximum data rate.

And SK Hynix has their own variety, touting bandwidth of 8.8GB/s (over DDR5's 6.4GB/s).
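For a rough sense of scale, here is a back-of-the-envelope check (a minimal Python snippet, purely illustrative) using the figures quoted in this submission, with units taken as given above:

    # Compare the quoted SK Hynix MRDIMM rate to base DDR5 using the numbers
    # from this submission (8.8 vs 6.4). The raw ratio lands in the same
    # ballpark as Micron's "up to 39%" effective-bandwidth claim.
    mrdimm, ddr5 = 8.8, 6.4
    print(f"{(mrdimm / ddr5 - 1) * 100:.1f}% higher")  # prints: 37.5% higher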

New for 2024, shipping in H2, it seems. Keep up with the times! Grow your RAM modules. Taller (literally).


Original Submission

posted by hubie on Thursday September 26, @04:42PM

Arthur T Knackerbracket has processed the following story:

Back in June, the FCC proposed a significant rule change that would require carriers to unlock all phones within 60 days of activation. At the time, the FCC was seeking public comment on the proposal, with plans to vote on whether to pursue the issue in early July. Since then, the proposal has been unanimously approved by the five-member commission, and the plan marches forward. To be clear, this doesn’t mean a new unlock policy is happening anytime soon; it just means that the FCC will continue to actively pursue these regulatory changes. Unsurprisingly, AT&T and T-Mobile have both spoken up against the change.

AT&T has indicated that the rule changes could negatively affect its ability to offer affordable devices, though that’s about the extent of its opposition so far. T-Mobile has been considerably more vocal. The “Uncarrier” has not only made it clear that this change could negatively impact their device payment plans and other services, but it has also gone so far as to imply that the change might cause the carrier to give up on payment plans altogether (as first reported by Broadband Breakfast). Furthermore, the carrier questions whether the FCC even has the authorization to pursue such a change.

[...] You might notice that I’ve yet to mention Verizon, and that’s for good reason. Big Red is the only major carrier vocally in support of the change. As you likely guessed, the reason isn’t out of the kindness of their hearts.

Back in 2008, the FCC reached an agreement with Verizon regarding the use of the 700MHz spectrum, with the carrier agreeing to prompt device unlocks. In 2019, the FCC agreed to implement a 60-day unlocking window to help Verizon combat potential fraud around its payment plans and special deal pricing. In other words, Verizon is already abiding by this change, so it loses nothing by supporting it—in fact, it might even have something to gain.

Right now, many carriers, both prepaid and postpaid, offer free trials through eSIM. While AT&T and T-Mobile limit these kinds of trials due to their current unlocking policies, it’s much easier to try out a different network while still keeping your Verizon phone and subscription. This means a Verizon customer has a greater chance to shop for other networks than those on another carrier, increasing their chances of being lured away by a competitor. If all carriers adhere to the same 60-day window, the playing field becomes level.


Original Submission

posted by hubie on Thursday September 26, @11:57AM
from the crushing-beetles dept.

VW is considering axing as many as 30,000 jobs as it scrambles to save billions of euros amid a slowdown in the car market, German media has reported:

The carmaker recently announced it could close some of its German factories for the first time in history as it struggles to reinvent itself for the electric era.

Analysts at Jefferies said VW is considering closing two to three facilities, with as many as five German sites under threat, putting 15,000 jobs at risk.

[...] A VW spokesman said: "We do not confirm the figure. One thing is clear: Volkswagen has to reduce its costs at its German sites."

R&D will likely be hit hard:

While Volkswagen is staying tight-lipped on specifics, Manager Magazin suggested research and development could take a massive hit. If their numbers pan out, roughly 4,000 to 6,000 R&D employees could be cut from the current number of around 13,000.

Previously: VW Turns on Germany as China Targets Europe's EV Blunders


Original Submission

posted by hubie on Thursday September 26, @07:14AM

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-electronic.html

Before Electronic Arts (EA) was the publishing juggernaut that it is today, it was just one of dozens of software publishers putting out titles for various home computers, including the IBM PC. EA was founded in 1982 by Trip Hawkins, who would go on to create the ultimately unsuccessful 3DO game console. In the mid-1980s, EA was perhaps most famous for their paint program, Deluxe Paint, which became a popular graphics tool for the whole computer gaming industry.

Unlike the companies we have covered to date, EA is most widely known for their games, not their copy protection schemes. EA is famous enough that a long segue into their corporate history isn't really necessary - you can just read the Wikipedia entry.

EA wasn't selling their copy protection technology, so there are no flashy advertisements extolling its virtues or many articles discussing it at all. All that is left to talk about is the protection itself.


Original Submission

posted by hubie on Thursday September 26, @02:31AM

WHOIS data is unreliable. So why is it used in TLS certificate applications?

Certificate authorities and browser makers are planning to end the use of WHOIS data for verifying domain ownership, following a report that demonstrated how threat actors could abuse the process to obtain fraudulently issued TLS certificates.

TLS certificates are the cryptographic credentials that underpin HTTPS connections, a critical component of online communications verifying that a server belongs to a trusted entity and encrypts all traffic passing between it and an end user. These credentials are issued by any one of hundreds of CAs (certificate authorities) to domain owners. The rules for how certificates are issued and the process for verifying the rightful owner of a domain are left to the CA/Browser Forum. One "base requirement rule" allows CAs to send an email to an address listed in the WHOIS record for the domain being applied for. When the receiver clicks an enclosed link, the certificate is automatically approved.
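To make the mechanism concrete, here is a minimal Python sketch of that email-based validation flow. It is purely illustrative: the function names are made up, a real CA pipeline is far more involved, and it assumes the system `whois` command is installed. The point is that the contact addresses come from whatever server answers the WHOIS query for that TLD.

    import re
    import subprocess

    def whois_contact_emails(domain: str) -> list[str]:
        # Query WHOIS for the domain; which server ultimately answers depends
        # on the TLD, which is exactly the trust assumption at issue here.
        record = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
        return sorted(set(re.findall(r"[\w.+-]+@[\w.-]+\.\w+", record)))

    def validate_domain_control(domain: str) -> None:
        # The CA/Browser Forum method described above: email a WHOIS-listed
        # contact a confirmation link; clicking it approves issuance.
        for email in whois_contact_emails(domain):
            print(f"would send confirmation link for {domain} to {email}")

    validate_domain_control("example.com")

If an attacker controls the WHOIS responses for a top-level domain, every address a flow like this "finds" is one the attacker chose, which is the failure watchTowr demonstrated.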

Researchers from security firm watchTowr recently demonstrated how threat actors could abuse the rule to obtain fraudulently issued certificates for domains they didn't own. The security failure resulted from a lack of uniform rules for determining the validity of sites claiming to provide official WHOIS records.

[...] The research didn't escape the notice of the CA/Browser Forum (CAB Forum). On Monday, a member representing Google proposed ending the reliance on WHOIS data for domain ownership verification "in light of recent events where research from watchTowr Labs demonstrated how threat actors could exploit WHOIS to obtain fraudulently issued TLS certificates."

The formal proposal calls for reliance on WHOIS data to "sunset" in early November. It establishes specifically that "CAs MUST NOT rely on WHOIS to identify Domain Contacts" and that "Effective November 1, 2024, validations using this [email verification] method MUST NOT rely on WHOIS to identify Domain Contact information."

Since Monday's submission, more than 50 follow-up comments have been posted. Many of the responses expressed support for the proposed change. Others have questioned the need for a change as proposed, given that the security failure watchTowr uncovered is known to affect only a single top-level domain.

[...] The proposed changes are formally in the discussion phase of deliberations. It's unclear when formal voting on the change will begin.

Previously: Rogue WHOIS Server Gives Researcher Superpowers No One Should Ever Have


Original Submission

posted by hubie on Wednesday September 25, @09:48PM
from the lies-damn-lies-statistics-and-pundits dept.

We are just a few weeks away from the general election in the United States and many publications provide daily updates to election forecasts. One of the most well-known forecasting systems was developed by Nate Silver, originally for the website FiveThirtyEight. Although Silver's model is quite sophisticated and incorporates a considerable amount of data beyond polls, other sites like RealClearPolitics just use a simple average of recent polls. Does all of the complexity of models like Silver's actually improve forecasts, and can we demonstrate that they're superior to a simple average of polls?

Pre-election polls are a bit like a science project that uses a lot of sensors to measure the state of a single system. There's a delay between the time a sensor is polled for data and when it returns a result, so the project uses many sensors to get more frequent updates. However, the electronics shop had a limited quantity of the highest quality sensor, so a lot of other sensors were used that have a larger bias, less accuracy, or use different methods to measure the same quantity. The science project incorporates the noisy data from the heterogeneous sensors to try to produce the most accurate estimate of the state of the system.

Polls are similar to my noisy sensor analogy in that each poll has its own unique methodology, has a different margin of error related to sample size, and may have what Silver calls "house effects" that may result in a tendency for results from polling firms to favor some candidates or political parties. Some of the more complex election forecasting systems like Silver's model attempt to correct for the bias and give more weight to polls with methodologies that are considered to have better polling practices and that use larger sample sizes.
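As a concrete (and deliberately oversimplified) illustration of that difference, here is a short Python sketch contrasting a plain polling average with one that subtracts a known house effect and weights by sample size. The polls and house effects are invented for the example, and this is not Silver's actual model.

    from dataclasses import dataclass

    @dataclass
    class Poll:
        margin: float        # candidate A minus candidate B, in points
        sample_size: int
        house_effect: float  # the firm's historical lean, in points (assumed known)

    polls = [
        Poll(margin=2.0,  sample_size=800,  house_effect=1.5),
        Poll(margin=-1.0, sample_size=1200, house_effect=-0.5),
        Poll(margin=0.5,  sample_size=600,  house_effect=0.0),
    ]

    # RealClearPolitics-style: unweighted mean of the raw margins.
    simple_avg = sum(p.margin for p in polls) / len(polls)

    # Model-style: remove each firm's lean, then weight by sample size
    # (a poll's error shrinks roughly like 1/sqrt(n), so weight by sqrt(n)).
    weights  = [p.sample_size ** 0.5 for p in polls]
    adjusted = [p.margin - p.house_effect for p in polls]
    weighted_avg = sum(w * a for w, a in zip(weights, adjusted)) / sum(weights)

    print(f"simple average:   {simple_avg:+.2f}")
    print(f"adjusted average: {weighted_avg:+.2f}")

Whether the extra machinery actually earns its keep on election day is exactly the question posed below.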

The purpose of the election forecasts is not to take a snapshot of the race at a particular point in time, but instead to forecast the results on election day. For example, after a political party officially selects its presidential candidate at the party's convention, the candidate tends to receive a temporary boost in the polls, which is known as a "post-convention bounce". Although this effect is well-documented through many election cycles, it is temporary, and polls taken during this period tend to overestimate the actual support the candidate will receive on election day. Many forecast models try to adjust for this bias when incorporating polls taken shortly after the convention.

Election models also often incorporate "fundamentals" such as approval ratings and the tendency of a strong economy to favor incumbents. This information can be used separately to predict the outcome of elections or incorporated into a model along with polling data. Some forecast models like Silver's model also incorporate polls from other states that are similar to the state that is being forecasted and data from national polls to try to produce a more accurate forecast and smooth out the noise from individual polls. These models may also incorporate past voting trends, expert ratings of races, and data from prediction markets. The end result is a model that is very complex and incorporates a large amount of data. But does it actually provide more accurate forecasts?

Unusual behaviors have been noted with some models, such as the tails in Silver's model that tended to include some very unusual outcomes. On the other hand, many models predicted that it was nearly certain that Hillary Clinton would defeat Donald Trump in the 2016 election, perhaps underestimating the potential magnitude of polling errors, leading to tails that weren't heavy enough. Election forecasters have to decide what factors to include in their models and how heavily to weight them, sometimes drawing criticism when their models appear to be an outlier. Presidential elections occur only once every four years in the United States, so there are more fundamental questions about whether there's even enough data to verify the accuracy of forecast models. There may even be some evidence of a feedback loop, where election forecast models could actually influence election results.

Whether the goal is to forecast a presidential election or project a player's statistics in an upcoming baseball season, sometimes even the most complex of forecasting systems struggle to outperform simple prediction models. I'm not interested in discussions about politics and instead pose a fundamental data science question: does all of the complexity of election models like Nate Silver's really make a meaningful difference, or is a simple average of recent polls just as good of a forecast?


Original Submission

posted by hubie on Wednesday September 25, @05:03PM

Arthur T Knackerbracket has processed the following story:

Our planet is choking on plastics. Some of the worst offenders, which can take decades to degrade in landfills, are polypropylene—which is used for things such as food packaging and bumpers—and polyethylene, found in plastic bags, bottles, toys, and even mulch.

Polypropylene and polyethylene can be recycled, but the process can be difficult and often produces large quantities of the greenhouse gas methane. They are both polyolefins, which are the products of polymerizing ethylene and propylene, raw materials that are mainly derived from fossil fuels. The bonds of polyolefins are also notoriously hard to break.

Now, researchers at the University of California, Berkeley have come up with a method of recycling these polymers that uses catalysts that easily break their bonds, converting them into propylene and isobutylene, which are gasses at room temperature. Those gasses can then be recycled into new plastics.

“Because polypropylene and polyethylene are among the most difficult and expensive plastics to separate from each other in a mixed waste stream, it is crucial that [a recycling] process apply to both polyolefins,” the research team said in a study recently published in Science.

The recycling process the team used is known as isomerizing ethenolysis, which relies on a catalyst to break down olefin polymer chains into small molecules. Polyethylene and polypropylene bonds are highly resistant to chemical reactions because both of these polyolefins have long chains of single carbon-carbon bonds. Most polymers have at least one carbon-carbon double bond, which is much easier to break.

[...] The reaction breaks all the carbon-carbon bonds in polyethylene and polypropylene, with the carbon atoms released during the breaking of these bonds ending up attached to molecules of ethylene. “The ethylene is critical to this reaction, as it is a co-reactant,” researcher R.J. Conk, one of the authors of the study, told Ars Technica. “The broken links then react with ethylene, which removes the links from the chain. Without ethylene, the reaction cannot occur.”

The entire chain is catalyzed until polyethylene is fully converted to propylene, and polypropylene is converted to a mixture of propylene and isobutylene.

This method has high selectivity—meaning it produces a large amount of the desired product. That means propylene derived from polyethylene, and both propylene and isobutylene derived from polypropylene. Both of these chemicals are in high demand: propylene is an important raw material for the chemical industry, while isobutylene is a frequently used monomer in many different polymers, including synthetic rubber, and is also used as a gasoline additive.

Because plastics are often mixed at recycling centers, the researchers wanted to see what would happen if polypropylene and polyethylene underwent isomerizing ethenolysis together. The reaction was successful, converting the mixture into propylene and isobutylene, with slightly more propylene than isobutylene.

[...] While this recycling method sounds like it could prevent tons upon tons of waste, it will need to be scaled up enormously for this to happen. When the research team increased the scale of the experiment, it produced the same yield, which looks promising for the future. Still, we’ll need to build considerable infrastructure before this could make a dent in our plastic waste.

“We hope that the work described…will lead to practical methods for…[producing] new polymers,” the researchers said in the same study. “By doing so, the demand for production of these essential commodity chemicals starting from fossil carbon sources and the associated greenhouse gas emissions could be greatly reduced.”

Science, 2024. DOI: 10.1126/science.adq731


Original Submission

posted by hubie on Wednesday September 25, @12:15PM
from the lawyer-up dept.

https://arstechnica.com/gaming/2024/09/nintendo-the-pokemon-company-sue-palworld-maker-pocketpair/

Nintendo and The Pokemon Company announced they have filed a patent-infringement lawsuit against Pocketpair, the makers of the heavily Pokémon-inspired Palworld. The Tokyo District Court lawsuit seeks an injunction and damages "on the grounds that Palworld infringes multiple patent rights," according to the announcement.
[...]
The many surface similarities between Pokémon and Palworld are readily apparent, even though Pocketpair's game adds many new features over Nintendo's (such as, uh, guns). But making legal hay over even heavy common ground between games can be an uphill battle. That's because copyright law (at least in the US) generally doesn't apply to a game's mere design elements, and only extends to "expressive elements" such as art, character design, and music.

Generally, even blatant rip-offs of successful games are able to make just enough changes to those "expressive" portions to avoid any legal trouble. But Palworld might clear the high legal bar for infringement if the game's 3D character models were indeed lifted almost wholesale from actual Pokémon game files, as some observers have been alleging since January.
[...]
"Palworld is such a different type of game from Pokémon, it's hard to imagine what patents (*not* copyrights) might have been even plausibly infringed," game industry attorney Richard Hoeg posted on social media Wednesday night. "Initial gut reaction is Nintendo may be reaching."

PocketPair CEO Takuro Mizobe told Automaton Media in January that the game had "cleared legal reviews" and that "we have absolutely no intention of infringing upon the intellectual property of other companies."
[...]
Update (Sept. 19, 2024): In a statement posted overnight, Pocketpair said it was currently "unaware of the specific patents we are accused of infringing upon, and have not been notified of such details."
[...]
Pocketpair promises that it "will continue improving Palworld and strive to create a game that our fans can be proud of."


Original Submission

posted by hubie on Wednesday September 25, @05:21AM
from the bad-IoT dept.

"The government's malware disabling commands, which interacted with the malware's native functionality, were extensively tested prior to the operation," according to the DOJ:

U.S. authorities have dismantled a massive botnet run by hackers backed by the Chinese government, according to a speech given by FBI director Christopher Wray on Wednesday. The botnet malware infected a number of different types of internet-connected devices around the world, including home routers, cameras, digital video recorders, and NAS drives. Those devices were used to help infiltrate sensitive networks related to universities, government agencies, telecommunications providers, and media organizations.

Wray explained the operation at the Aspen Digital conference and said the hackers work for a Beijing-based company called Integrity Technology Group, which is known to U.S. researchers as Flax Typhoon. The botnet was launched in mid-2021, according to the FBI, and infected roughly 260,000 devices as of June 2024.

The operation to dismantle the botnet was coordinated by the FBI, the NSA, and the Cyber National Mission Force (CNMF), according to a press release dated Wednesday. The U.S. Department of Justice received a court order to take control of the botnet infrastructure by sending disabling commands to the malware on infected devices. The hackers tried to counterattack by hitting FBI infrastructure but were "ultimately unsuccessful," according to the law enforcement agency.

About half of the devices hijacked were in the U.S., according to Wray, but there were also devices identified as compromised in South America, Europe, Africa, Southeast Asia, and Australia. And the DOJ noted in a press release that authorities in Australia, Canada, New Zealand, and the UK all helped take down the botnet.

Originally spotted on Schneier on Security.

Related: Chinese Malware Removed From SOHO Routers After FBI Issues Covert Commands


Original Submission

posted by janrinok on Wednesday September 25, @12:34AM

Starlink imposes $100 "congestion charge" on new users in parts of US:

New Starlink customers have to pay a $100 "congestion charge" in areas where the satellite broadband network has limited capacity.

"In areas with network congestion, there is an additional one-time charge to purchase Starlink Residential services," a Starlink FAQ says. "This fee will only apply if you are purchasing or activating a new service plan. If you change your Service address or Service Plan at a later date, you may be charged the congestion fee."

The charge is unwelcome for anyone wanting Starlink service in a congested area, but it could help prevent the capacity crunch from getting worse by making people think twice about signing up. The SpaceX-owned Internet service provider also seems to anticipate that people who sign up for service in congested areas may change their minds after trying it out for a few weeks.

"Our intention is to no longer charge this fee to new customers as soon as network capacity improves. If you're not satisfied with Starlink and return it within the 30-day return window, the charge will be refunded," the company said.

There is some corresponding good news for people in areas with more Starlink capacity. Starlink "regional savings," introduced a few months ago, provides a $100 service credit in parts of the US "where Starlink has abundant network availability." The credit is $200 in parts of Canada with abundant network availability.

The congestion charge was reported by PCMag on September 13, after being noticed by users of the Starlink subreddit. "The added fee appears to pop up in numerous states, particularly in the south and eastern US, such as Texas, Florida, Kansas, Ohio and Virginia, among others, which have slower Starlink speeds due to the limited network capacity," PCMag noted.

Speed test data showed in 2022 that Starlink speeds dropped significantly as more people signed up for the service, a fact cited by the Federal Communications Commission when it rejected $886 million worth of broadband deployment grants for the company.

This isn't the first time Starlink has varied pricing based on regional congestion. In February 2023, Starlink decided that people in limited-capacity areas would pay $120 a month, and people in excess-capacity areas would pay $90 a month.


Original Submission

posted by janrinok on Tuesday September 24, @07:51PM
from the when-politics-and-science-collide dept.

Arthur T Knackerbracket has processed the following story:

Since its founding in 1954, high-energy physics laboratory CERN has been a flagship for international scientific collaboration. That commitment has been under strain since the Russian invasion of Ukraine in 2022. CERN decided to cut ties with Moscow late last year over deaths resulting from the country's "unlawful use of force" in the ongoing conflict.

With the existing international cooperation agreements now lapsing, the Geneva-based organization is expected to expel hundreds of scientists affiliated with Russian institutions on November 30, Nature reports. However, CERN will maintain its links with the Joint Institute for Nuclear Research, an intergovernmental center near Moscow.

CERN was founded in the wake of World War II as a place dedicated to the peaceful pursuit of science. The organization currently has 24 member states and, in 2019 alone, hosted about 12,400 users from institutions in more than 70 countries. Russia has never been a full member of CERN, but collaborations first began in 1955, with hundreds of Russia-affiliated scientists contributing to experiments in the ensuing decades. Now, that 60-year history of collaboration, and Russia's long-standing observer status, is ending. As World Nuclear News reported earlier this year:

The decision to end the cooperation agreement was taken in December 2023 when CERN's Council passed a resolution "to terminate the International Cooperation Agreement between CERN and the Russian Federation, together with all related protocols and addenda, with effect from 30 November 2024; To terminate ... all other agreements and experiment memoranda of understanding allowing the participation of the Russian Federation and its national institutes in the CERN scientific programme, with effect from 30 November 2024; AFFIRMS That these measures concern the relationship between CERN and Russian and Belarusian institutes and do not affect the relationship with scientists of Russian nationality affiliated with other institutes." The cooperation agreement with Belarus will come to an end on 27 June, before the Russian one ends.

It's unclear how this decision will impact scientific research at CERN. Russia's 4.5 percent contribution to the combined budget for ongoing experiments at the Large Hadron Collider has already been covered by other collaboration members. Some think the effects will be minimal since researchers have had plenty of time to prepare for the exit. Certain essential staff members have successfully found employment outside of Russia so that they can stay on.

Others are less confident. “It will leave a hole. I think it’s an illusion to believe one can cover that very simply by other scientists,” particle physicist and CMS member Hannes Jung of the German Electron Synchrotron in Hamburg told Nature. He's also a member of the Science4Peace Forum, which opposes restrictions on scientific cooperation.


Original Submission

posted by janrinok on Tuesday September 24, @03:08PM

The Arc browser that lets you customize websites had a serious vulnerability:

The Arc browser's 'Boosts' feature would've allowed bad actors to edit a website and add a malicious payload that their target could download to their computer.

One of the features that separates the Arc browser from its competitors is the ability to customize websites. The feature, called "Boosts," allows users to change a website's background color, switch to a font they like or one that makes it easier for them to read, and even remove unwanted elements from the page completely. These alterations aren't supposed to be visible to anyone else, but users can share them across their devices. Now, Arc's creator, the Browser Company, has admitted that a security researcher found a serious flaw that would've allowed attackers to use Boosts to compromise their targets' systems.

The company used Firebase, which the security researcher known as "xyzeva" described as a "database-as-a-backend service" in their post about the vulnerability, to support several Arc features. For Boosts, in particular, it's used to share and sync customizations across devices. In xyzeva's post, they showed how the browser relies on a creator's identification (creatorID) to load Boosts on a device. They also shared how someone could change that element to their target's identification tag and assign that target Boosts that they had created.

If a bad actor makes a Boost with a malicious payload, for instance, they can just change their creatorID to the creatorID of their intended target. When the intended victim then visits the website on Arc, they could unknowingly download the hacker's malware. And as the researcher explained, it's pretty easy to get user IDs for the browser. A user who refers someone to Arc shares their ID with the recipient, and if they themselves signed up from a referral, the person who sent it has their ID as well. Users can also share their Boosts with others, and Arc has a page with public Boosts that contain the creatorIDs of the people who made them.
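To show the class of bug in the abstract, here is a tiny Python sketch. It is not Arc's actual code or Firebase schema, just an illustration of what goes wrong when a client-supplied creatorID is the only thing deciding whose Boosts get loaded.

    # Hypothetical stand-in for the Firebase-backed Boost store.
    boosts_table: dict[str, list[tuple[str, str]]] = {}

    def save_boost(creator_id: str, target_site: str, payload: str) -> None:
        # Flaw: nothing verifies that creator_id actually belongs to the caller.
        boosts_table.setdefault(creator_id, []).append((target_site, payload))

    def load_boosts_for(creator_id: str, site: str) -> list[str]:
        # The browser loads Boosts keyed only by creatorID, so whoever wrote
        # a row under that ID controls what is applied when the site loads.
        return [p for (s, p) in boosts_table.get(creator_id, []) if s == site]

    # An attacker swaps in the victim's ID (IDs leak via referrals and public Boosts).
    save_boost(creator_id="victim-uid", target_site="example.com", payload="<malicious boost>")
    print(load_boosts_for("victim-uid", "example.com"))  # the victim's browser would apply this

The generic fix for this class of flaw is to authorize writes server-side, so a record can only be created under the authenticated caller's own ID.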

In its post, the Browser Company said xyzeva notified it about the security issue on August 25 and that it issued a fix a day later with the researcher's help.


Original Submission

posted by janrinok on Tuesday September 24, @10:18AM

Google employees' attempts to hide messages from investigators might backfire:

Google employees liberally labeled their emails as "privileged and confidential" and spoke "off the record" over chat messages, even after being told to preserve their communications for investigators, lawyers for the Justice Department have told a Virginia court over the past couple of weeks.

That strategy could backfire if the judge in Google's second antitrust trial believes the company intentionally destroyed evidence that would have looked bad for it. The judge could go as far as giving an adverse inference about Google's missing documents, which would mean assuming they would have been bad for Google's case.

Documents shown in court regularly display the words "privileged and confidential" as business executives discuss their work, occasionally with a member of Google's legal team looped in. On Friday, former Google sell-side ad executive Chris LaSala said that wasn't the only strategy Google used. He testified that after being placed on a litigation hold in connection with law enforcers' investigation, Google chat messages had history off by default, and his understanding was that needed to be changed for each individual chat that involved substantive work conversations. Multiple former Google employees testified to never changing the default setting and occasionally having substantive business discussions in chats, though they were largely reserved for casual conversations.

LaSala also used that default to his advantage at times, documents shown by the government in court revealed. In one 2020 chat, an employee asked LaSala if they should email two other Google employees about an issue and, soon after, asked, "Or too sensitive for email so keep on ping?" LaSala responded, instructing the employee to "start a ping with history turned off." In a separate 2020 exchange, LaSala again instructed his employee to "maybe start an off the record ping thread with Duke, you, me."

"It was just how we spoke. Everyone used the phrase 'off the record ping,'" LaSala testified. "My MO was mostly off the record, so old tricks die hard."

Still, LaSala said he "tried to follow the terms of the litigation hold," but he acknowledged he "made a mistake." Shortly after a training about the hold, he recalled receiving a chat from a colleague. Though LaSala said he turned history on, he wasn't sure the first message would be preserved. LaSala said he put that message in an email just in case. In general, LaSala said, "We were really good at documenting ... and to the extent I made a mistake a couple times, it was not intentional."

Brad Bender, another Google ad tech executive who testified earlier in the week, described conversations with colleagues over chat as more akin to "bumping into the hall and saying 'hey we should chat.'" The DOJ also questioned former Google executive Rahul Srinivasan about emails he marked privileged and confidential, asking what legal advice he was seeking in those emails. He said he didn't remember.

Google employees were well aware of how their written words could be used against the company, the DOJ argued, pointing to the company's "Communicate with Care" legal training for employees. In one 2019 email, Srinivasan copied a lawyer on an email to colleagues about an ad tech feature and reminded the group to be careful with their language. "We should be particularly careful when framing something as a 'circumvention,'" he wrote. "We should assume that every document (and email) we generate will likely be seen by regulators." The email was labeled "PRIVILEGED and CONFIDENTIAL."

While the many documents shown by the DOJ demonstrate that Google often discussed business decisions in writing, at other times, they seemed to intentionally leave the documentation sparse. "Keeping the notes limited due to sensitivity of the subject," a 2021 Google document says. "Separate privileged emails will be sent to folks to follow up on explicit [action items]."

"We take seriously our obligations to preserve and produce relevant documents," Google spokesperson Peter Schottenfels said in a statement. "We have for years responded to inquiries and litigation, and we educate our employees about legal privilege. In the DOJ cases alone, we have produced millions of documents including chat messages and documents not covered by legal privilege."

The judge in Google's first antitrust battle with the DOJ over its search business declined to go as far as an adverse inference, even though he ruled against Google in most other ways. Still, he made clear he wasn't "condoning Google's failure to preserve chat evidence" and said, "Any company that puts the onus on employees to identify and preserve relevant evidence does so at its own peril. Google avoided sanctions in this case. It may not be so lucky in the next one."


Original Submission

posted by janrinok on Tuesday September 24, @05:33AM

Arthur T Knackerbracket has processed the following story:

The Linux kernel is 33 years old. Its creator, Linus Torvalds, still enjoys an argument or two but is baffled why the debate over Rust has attracted so much heat.

"I'm not sure why Rust has been such a contentious area," Torvalds said during an on-stage chat this week with Dirk Hohndel, Verizon's Head of Open Source.

"It reminds me of when I was young and people were arguing about vi versus Emacs," said the software engineer. Hohndel interjected, "They still are!"

Torvalds laughed, "Maybe they still are! But for some reason, the whole Rust versus C discussion has taken almost religious overtones."

Getting Rust into the Linux kernel has been a hot topic for some time. In 2022, developers were arguing over the language, with some calling the memory safety features of Rust an "insult" to some of the hard work that had gone into the kernel over the years. At the beginning of September, one of the maintainers of the Rust for Linux project stepped down, citing frustration with "nontechnical nonsense" as a reason for resignation.

During the conversation at the Linux Foundation's Open Source Summit in Vienna this week, Torvalds continued, "Clearly, there are people who just don't like the notion of Rust, and having Rust encroach on their area.

"People have even been talking about the Rust integration being a failure … We've been doing this for a couple of years now so it's way too early to even say that, but I also think that even if it were to become a failure – and I don't think it will – that's how you learn," he said.

"So I see the whole Rust thing as positive, even if the arguments are not necessarily always [so]."

Keen to pull those positives from the row, Torvalds added, "One of the nice parts about Rust has been how it's livened up discussions," before acknowledging, "some of the arguments get nasty, and people do actually - yes - decide 'this is not worth my time,' but at the same time it's kind of interesting, and I think it shows how much people care."

"C is, in the end, a very simple language. It's one of the reasons I enjoy C and why a lot of C programmers enjoy C, even if the other side of that picture is obviously that because it's simple it's also very easy to make mistakes," he argued.

With impressive diplomacy, considering his outbursts of years past, Torvalds went on, "There's a lot of people who are used to the C model, and they don't necessarily like the differences... and that's ok.

"Some people care about specific architectures, and some people like file systems, and that's how it should be. That's how I see Rust."


Original Submission

posted by hubie on Tuesday September 24, @12:51AM
from the dystopia-is-now! dept.

https://arstechnica.com/information-technology/2024/09/dead-internet-theory-comes-to-life-with-new-ai-powered-social-media-app/

For the past few years, a conspiracy theory called "Dead Internet theory" has picked up speed as large language models (LLMs) like ChatGPT increasingly generate text and even social media interactions found online. The theory says that most social Internet activity today is artificial and designed to manipulate humans for engagement.

On Monday, software developer Michael Sayman launched a new AI-populated social network app called SocialAI that feels like it's bringing that conspiracy theory to life, allowing users to interact solely with AI chatbots instead of other humans. It's available on the iPhone app store, but so far, it's picking up pointed criticism.

After its creator announced SocialAI as "a private social network where you receive millions of AI-generated comments offering feedback, advice & reflections on each post you make," computer security specialist Ian Coldwater quipped on X, "This sounds like actual hell." Software developer and frequent AI pundit Colin Fraser expressed a similar sentiment: "I don't mean this like in a mean way or as a dunk or whatever but this actually sounds like Hell. Like capital H Hell."
[...]
As The Verge reports in an excellent rundown of the example interactions, SocialAI lets users choose the types of AI followers they want, including categories like "supporters," "nerds," and "skeptics." These AI chatbots then respond to user posts with brief comments and reactions on almost any topic, including nonsensical "Lorem ipsum" text.

Sometimes the bots can be too helpful. On Bluesky, one user asked for instructions on how to make nitroglycerin out of common household chemicals and received several enthusiastic responses from bots detailing the steps, although several bots provided different recipes, none of which may be wholly accurate.
[...]
None of this would be possible without access to inexpensive LLMs like the kind that power ChatGPT. So far, SocialAI creator Sayman has said he is using a "custom mix" of AI models.

[...] On Bluesky, evolutionary biologist and frequent AI commentator Carl T. Bergstrom wrote, "So I signed up for the new heaven-ban SocialAI social network where you're all alone in a world of bots. It is so much worse than I ever imagined. It's not GPT-level AI; it's more like ELIZA level, if the ELIZAs were lazily written stereotypes of every douchebag on ICQ circa 1999."
[...]
As a piece of prospective performance art, SocialAI may be genius. Or perhaps you could look at it as a form of social commentary on the vapidity of social media or about the harm of algorithmic filter bubbles that only feed you what you want to see and hear. But since its creator seems sincere, we're unsure how the service may fit into the future of social media apps.

For now, the app has already picked up a few positive reviews on the app store from people who seem to enjoy this taste of the hypothetical "dead Internet" by verbally jousting with the bots for entertainment: "5 stars and I've been using this for 10 minutes. I could argue with this AI for HOURS 😭 it's actually so much fun to see what it will say to the most random stuff 💀."


Original Submission