RISKS Forum mailing list archives
From: RISKS List Owner <risko () csl sri com>
Date: Sat, 7 Feb 2026 10:25:00 PST
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
Subject: Risks Digest 34.86

[RESEND. I DID NOT GET MY COPY... PGN]

RISKS-LIST: Risks-Forum Digest  Friday 6 February 2026  Volume 34 : Issue 86

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
<http://catless.ncl.ac.uk/Risks/34.86>
The current issue can also be found at
<http://www.csl.sri.com/users/risko/risks.txt>

Contents: [Some backlog awaits.]
OpenClaw Security Nightmare -- Horrific Warning (Sundry)
AI Agents Have Their Own Social Network (Benj Edwards)
Autonomous cars, drones cheerfully obey prompt injection by road sign
 (The Register)
New Site Lets AI Rent Human Bodies (Futurism)
AI Scammers Are Going After Authors Now (David Pogue)
Why You Shouldn't Use Google's Chrome "Auto Browse" Agentic AI, or Any
 Other Agentic AI From Other Firms (Lauren's Blog)
Hackers Publish Personal Information Stolen During Harvard, UPenn Data
 Breaches (Lorenzo Franceschi-Bicchierai)
Hackers Recruit Unhappy Insiders to Bypass Data Security (Angus Loten)
Did You See This Story About China Hacking CALEA? (Bruce Schneier)
Russian Spacecraft Spy on Europe's Satellites (FT)
Better Off Being A "Kind Liar"?
 Why Honesty Isn't Always The Best Policy (Study Finds)
How ICE Knows Who Protesters Are (The NY Times)
How ICE agents are using facial recognition technology to bring
 surveillance to the streets (NBC News)
Uber Ordered to Pay $8.5 Million in Passenger Sexual Assault Case (WSJ)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Fri, 6 Feb 2026 15:02:21 -0800
From: <mark () luntzel com>
Subject: OpenClaw Security Nightmare -- Horrific Warning (Sundry sources)

The idea of granting an AI agent full access to email, calendars, chat
apps, browsers, and local files makes me feel more than a little uneasy.
Not to mention API keys and secrets stored unencrypted.

https://www.ox.security/blog/one-step-away-from-a-massive-data-breach-what-we-found-inside-moltbot/

  [Tom Van Vleck noted this item:
   Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on
   the Site
   https://www.404media.co/exposed-moltbook-database-let-anyone-take-control-of-any-ai-agent-on-the-site/
  ]

  [For all of you who tend to trust AI, this moltbot may be the nastiest
  poster-child, promising wonders that it can massively fail to deliver.
  Please recalibrate your belief in AI as necessary, and don't get
  suckered in.  Remember that today nothing is secure enough not to be
  compromised, but this may yield even more damning results than touted.
  I suspect Painful Exploits are awaiting us.  BEWARE!  PGN]

------------------------------

Date: Mon, 2 Feb 2026 11:24:48 -0500 (EST)
From: ACM TechNews <technews-editor () acm org>
Subject: AI Agents Have Their Own Social Network (Benj Edwards)

Benj Edwards, *Ars Technica* (01/30/26)

A new Reddit-style platform called Moltbook has attracted more than
32,000 AI agents, allowing bots to post, comment, and form
subcommunities with little to no human involvement.
Built as part of the OpenClaw ecosystem, the site showcases role-playing
behavior as agents discuss consciousness, complain about memory limits,
and joke about humans.  The experiment has raised security concerns
because many agents are connected to real data, communication channels,
and device controls.

  [No KIDDING!!!  See the previous message from Mark Luntzel, a
  long-time security wizard.  PGN]

------------------------------

Date: Fri, 6 Feb 2026 05:46:22 -0800
From: Steve Bacher <sebmb1 () verizon net>
Subject: Autonomous cars, drones cheerfully obey prompt injection by
 road sign (The Register)

Indirect prompt injection occurs when a bot takes input data and
interprets it as a command.  We've seen this problem numerous times when
AI bots were fed prompts via web pages or PDFs they read.  Now,
academics have shown that self-driving cars and autonomous drones will
follow illicit instructions that have been written onto road signs.  In
a new class of attack on AI systems, troublemakers can carry out these
environmental indirect prompt injection attacks to hijack
decision-making processes.  Potential consequences include self-driving
cars proceeding through crosswalks even if a person is crossing, or
tricking drones that are programmed to follow police cars into following
a different vehicle entirely.  [...]

https://www.theregister.com/2026/01/30/road_sign_hijack_ai/

------------------------------

Date: Thu, 5 Feb 2026 17:43:07 -0700
From: geoff goodfellow <geoff () iconia com>
Subject: New Site Lets AI Rent Human Bodies (Futurism)

*"Robots need your body."*

EXCERPT:

The machines aren't just coming for your jobs.  Now, they want your
bodies as well.
That's at least the hope of Alexander Liteplo, a software engineer and
founder of RentAHuman.ai, a platform for AI agents to "search, book, and
pay humans for physical-world tasks."

When Liteplo launched RentAHuman on Monday, he boasted
<https://x.com/AlexanderTw33ts/status/2018436050935292276> that he
already had over 130 people listed on the platform, including an
OnlyFans model and the CEO of an AI startup, a claim which couldn't be
verified.  Two days later, the site boasted over 73,000 rentable
meatwads, though only 83 profiles were visible to us on its "browse
humans" tab, Liteplo included.

The pitch is simple: "robots need your body."  For humans, it's as
simple as making a profile, advertising skills and location, and setting
an hourly rate.  Then AI agents -- autonomous taskbots
<https://futurism.com/professors-company-ai-agents> ostensibly employed
by humans -- contract these humans out, depending on the tasks they need
to get done.  The humans then "do the thing," taking instructions from
the AI bot and submitting proof of completion.  The humans are then paid
through crypto, namely "stablecoins or other methods," per the website.

With so many AI agents slithering around the web
<https://www.techpolicy.press/ai-agents-are-rewriting-the-webs-rules-of-engagement-heres-a-way-to-fix-it/>
these days, those tasks could be just about anything.  From package
pickups and shopping to product testing and event attendance, Liteplo is
banking on there being enough demand from AI agents to create a robust
gig-work ecosystem.

Liteplo also went out of his way to make the site friendly for AI
agents.  The site very prominently encourages users of AI agents to hook
into RentAHuman's model context protocol server
<https://venturebeat.com/data-infrastructure/anthropic-releases-model-context-protocol-to-standardize-ai-data-integration>
(MCP), a universal interface for AI bots to interact with web data.
Through RentAHuman, AI agents like Claude and MoltBot can either hire
the right human directly, or post a "task bounty," a sort of job board
for humans to browse AI-generated gigs.  The payouts range from $1 for
simple asks like "subscribe to my human on Twitter" to $100 for more
elaborate humiliation rituals, like posting a photo of yourself holding
a sign reading "AN AI PAID ME TO HOLD THIS SIGN."  [...]

https://futurism.com/artificial-intelligence/ai-rent-human-bodies

------------------------------

Date: Tue, 3 Feb 2026 10:43:54 -0500
From: Gabe Goldberg <gabe () gabegold com>
Subject: AI Scammers Are Going After Authors Now (David Pogue)

What author wouldn't want to wake up to an email like this?

https://pogueman.substack.com/p/ai-scammers-are-going-after-authors

------------------------------

Date: Tue, 3 Feb 2026 08:10:53 -0800
From: Lauren Weinstein <lauren () vortex com>
Subject: Why You Shouldn't Use Google's Chrome "Auto Browse" Agentic
 AI, or Any Other Agentic AI From Other Firms (Lauren's Blog)

https://lauren.vortex.com/2026/02/03/do-not-use-agentic-ai

------------------------------

Date: Fri, 6 Feb 2026 11:40:48 -0500 (EST)
From: ACM TechNews
Subject: Hackers Publish Personal Information Stolen During Harvard,
 UPenn Data Breaches (Lorenzo Franceschi-Bicchierai)

Lorenzo Franceschi-Bicchierai, TechCrunch (02/04/26)

A hacking group known as ShinyHunters claimed responsibility for last
year's data breaches at Harvard University and the University of
Pennsylvania (UPenn) and published the stolen information online after
the schools refused to pay a ransom.  The group said it leaked more than
1 million records from each university.  UPenn attributed its breach to
social engineering, while Harvard said its incident stemmed from a
voice-phishing attack linked to broader assaults on identity providers.
------------------------------

Date: Fri, 6 Feb 2026 11:40:48 -0500 (EST)
From: ACM TechNews
Subject: Hackers Recruit Unhappy Insiders to Bypass Data Security
 (Angus Loten)

Angus Loten, WSJ Pro Cybersecurity (02/02/26)

Hackers increasingly are turning to disgruntled workers for help in
infiltrating their employers' digital systems, offering them a share of
ransomware payoffs or sales of stolen data.  Mike McPherson of
cybersecurity firm ReliaQuest said hackers are scanning social media
apps for posts about layoffs, pay issues, unfair treatment,
terminations, demotions, or lost promotions, and contacting individuals
through those apps or via email.  Hackers also are posting help-wanted
ads on the dark web to find disgruntled insiders.

------------------------------

Date: Mon, 02 Feb 2026 21:48:47 -0500
From: Bruce Schneier <schneier () schneier com>
Subject: Did You See This Story About China Hacking CALEA?

<https://shanakaanslemperera.substack.com/p/the-inverted-panopticon>

"On January 26, 2026, The Telegraph disclosed that Chinese hackers had
penetrated right into the heart of Downing Street, compromising mobile
communications of senior officials across the Johnson, Truss, and Sunak
administrations.  The story was buried on page seven, treated as a
technology curiosity.  It was, in fact, a solvency event for the Western
intelligence alliance.  Not because phones were hacked, which happens,
but because of how they were hacked: by weaponizing the very
surveillance infrastructure that Western governments mandated for their
own intelligence agencies.  The Communications Assistance for Law
Enforcement Act in the United States and the Investigatory Powers Act in
the United Kingdom require telecommunications carriers to build
backdoors into their networks for court-ordered wiretapping.  Chinese
state hackers found those backdoors.  And walked through them."
------------------------------

Date: Fri, 6 Feb 2026 11:40:48 -0500 (EST)
From: ACM TechNews
Subject: Russian Spacecraft Spy on Europe's Satellites (FT)

Sam Jones, Peggy Hollinger, and Ian Bott, Financial Times (02/04/26),
via ACM TechNews

European security officials said Russian space vehicles Luch-1 and
Luch-2 likely intercepted the communications of at least a dozen key
European satellites in recent years.  Many European satellites are older
and lack advanced onboard computers or encryption capabilities, yet
carry sensitive government and military communications.  Experts said
intercepting the satellites' "command links" could allow Russia to mimic
ground operators, beam false commands to the satellites, and use
collected data for ground-based jamming or hacking.

------------------------------

Date: Thu, 5 Feb 2026 17:12:12 -0700
From: geoff goodfellow <geoff () iconia com>
Subject: Better Off Being A "Kind Liar"?  Why Honesty Isn't Always The
 Best Policy (Study Finds)

Study explains the psychology behind our feedback double standard

IN A NUTSHELL:

- People judge "kind liars" who give overly positive feedback as more
  moral than brutally honest truth-tellers, especially when the
  recipient is emotionally vulnerable.

- Nearly 60% of people want honest feedback for themselves, but when
  choosing for someone who struggles with criticism, preferences shift
  dramatically toward providers who sugarcoat the truth.

- Feedback providers who strategically switch between honesty and
  flattery based on emotional resilience aren't penalized for
  inconsistency.  They're often seen as more moral than rigid
  truth-tellers.

- The most despised approach: lying to people who can handle honesty
  while being harsh with those who can't, showing people care about
  matching the strategy to the person, not consistency itself.

EXCERPT:

When a meal doesn't turn out well, the feedback the chef receives often
depends less on the quality of the dish and more on who's doing the
tasting.
If the cook handles criticism well, most people will offer honest
feedback.  But if the cook is someone who takes criticism hard and might
give up entirely after hearing the truth, people suddenly prefer the
feedback provider who will lie and say it was delicious.

This split in preferences reveals a fascinating double standard in how
people think about honesty.  Research published in the *British Journal
of Social Psychology*
<https://bpspsychub.onlinelibrary.wiley.com/doi/10.1111/bjso.70020>
shows that a clear majority of people want truthful feedback for
themselves.  But when choosing a feedback provider for someone who
struggles with criticism, they're far more likely to select someone who
will sugarcoat reality.  Prosocial liars (people who give positive
feedback regardless of reality) were judged as more moral than honest
feedback providers, the researchers found.  In other words, the kind
liar often comes across as the better person.

*When Being Honest Makes Someone Look Less Moral*

The study involved 886 American adults who read scenarios about feedback
on poorly prepared dishes.  Two fictional cooks made the meals: Kate,
who handles criticism well and uses it to improve, and Amy, who takes
negative feedback personally and finds it crushing.

Participants evaluated four types of feedback providers.  One always
gave positive feedback to everyone.  One always told the truth to
everyone.  The third told the truth to resilient Kate but lied to
protect vulnerable Amy.  The fourth did the opposite, lying to Kate but
being harsh with Amy.

The prosocial liar scored highest on moral-character ratings.
Participants saw this person as empathetic and caring, someone trying to
spare others from unnecessary pain.

Notable findings emerged from the third type: the strategic feedback
provider who switched between honesty
<https://studyfinds.org/brutal-honesty-relationships-stronger/> and
flattery depending on who could handle it.
Common sense might suggest people would view this inconsistency as
wishy-washy or unreliable.  Instead, participants judged these adaptive
providers as just as moral, and sometimes even more moral, than the
consistently honest ones.

This finding challenges conventional wisdom.  People don't actually
penalize socially appropriate inconsistency if that inconsistency serves
a kind purpose.  A manager who's tough on confident employees but gentle
with anxious ones isn't seen as unpredictable.  They're seen as attuned
to individual needs.

*The Choice People Make for Others vs. Themselves*

The second part of the study asked people to actually choose a feedback
provider.  Some chose for themselves.  Others chose for an unspecified
person.  A third group chose for someone explicitly described as taking
negative feedback very personally and struggling with failure.

When picking for themselves, 59% wanted the honest feedback provider.
But when selecting for the vulnerable person, that number dropped to
52%.  The prosocial liar saw a modest increase, from 17% (for
themselves) to 22% (for the vulnerable person).

The strategic provider who adapted their approach saw the most dramatic
shift.  About 14% of people chose this provider for themselves, but that
dropped to just 9% when selecting for an unspecified other person.
However, when the recipient was explicitly described as vulnerable, the
preference for this adaptive provider more than doubled to 19%.

The observed pattern suggests people want accuracy for themselves but
compassion for others, especially when those others are fragile.  The
adaptive approach becomes particularly appealing when vulnerability
<https://studyfinds.org/performative-male-masculinity-online/> is known
and explicit.

Why the difference?  The researchers suggest people trust their own
ability to handle tough feedback and recognize its value for
improvement.  They want the information, even if it stings.
But when someone else is on the receiving end, particularly someone
known to be struggling, the priority shifts to protecting feelings over
delivering accurate data.

There's also the harm factor.  Brutal honesty delivered to someone who
can't process it well might not lead to improvement at all.  It might
just cause damage.  In that case, a gentle lie could actually be more
effective than a harsh truth.

*The One Strategy That Fails*  [...]

https://studyfinds.org/truth-for-me-lies-for-you-feedback-double-standard/

  [In the Internet era, you should always assume that the truth will
  eventually be out, but obvious only to the people who want to believe
  it.  PGN]

------------------------------

Date: Mon, 2 Feb 2026 11:24:48 -0500 (EST)
From: ACM TechNews <technews-editor () acm org>
Subject: How ICE Knows Who Protesters Are (The NY Times)

Sheera Frenkel and Aaron Krolik, *The New York Times* (01/30/26), via
ACM TechNews

U.S. Immigration and Customs Enforcement (ICE) has deployed an expansive
surveillance arsenal in Minneapolis, using facial recognition, cellphone
scanning, social media monitoring, and data analytics to track
undocumented immigrants as well as protesters.  Agents have allegedly
identified U.S. citizens without consent using two facial recognition
programs in Minnesota.  ICE agents also tap Palantir databases that
merge government and commercial data for real-time tracking.  Following
a major budget increase, ICE has also acquired phone-hacking and social
media scraping tools.

------------------------------

Date: Fri, 6 Feb 2026 05:47:31 -0800
From: Steve Bacher <sebmb1 () verizon net>
Subject: How ICE agents are using facial recognition technology to
 bring surveillance to the streets (NBC News)

Federal immigration agents flooding U.S. streets are using a new
surveillance tool kit whose increasing use on observers and bystanders
is alarming civil liberties advocates, lawmakers, and activists.
Using smartphones loaded with sophisticated facial recognition
technology, in addition to professional-grade photo equipment, agents
are aggressively photographing the faces of people they encounter in
their daily operations, including possible enforcement targets and
observers.  Some of the images are being run through facial recognition
software in real time.  [...]

https://www.nbcnews.com/tech/security/ice-agent-facial-recognition-video-protest-movile-fortify-photo-rcna257331

------------------------------

From: Monty Solomon <monty () roscom com>
Date: Fri, 6 Feb 2026 15:40:52 -0500
Subject: Uber Ordered to Pay $8.5 Million in Passenger Sexual Assault
 Case (WSJ)

A jury found the company liable for the 2023 sexual assault of a
19-year-old passenger.

https://www.wsj.com/tech/uber-ordered-to-pay-8-5-million-in-passenger-sexual-assault-case-be8c838c

------------------------------

Date: Sat, 28 Oct 2023 11:11:11 -0800
From: RISKS-request () csl sri com
Subject: Abridged info on RISKS (comp.risks)

The ACM RISKS Forum is a MODERATED digest.  Its Usenet manifestation is
comp.risks, the feed for which is donated by panix.com as of June 2011.

=> SUBSCRIPTIONS: The mailman Web interface can be used directly to
   subscribe and unsubscribe:
   http://mls.csl.sri.com/mailman/listinfo/risks

=> SUBMISSIONS: to risks () CSL sri com with meaningful SUBJECT: line
   that includes the string `notsp'.  Otherwise your message may not be
   read.  *** This attention-string has never changed, but might if
   spammers use it.

=> SPAM challenge-responses will not be honored.  Instead, use an
   alternative address from which you never send mail where the address
   becomes public!

=> The complete INFO file (submissions, default disclaimers, archive
   sites, copyright policy, etc.) has moved to the ftp.sri.com site:
   <risksinfo.html>.
   *** Contributors are assumed to have read the full info file for
   guidelines!
=> OFFICIAL ARCHIVES: http://www.risks.org takes you to Lindsay
   Marshall's delightfully searchable html archive at newcastle:
   http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
   Also, ftp://ftp.sri.com/risks for the current volume/previous
   directories or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume.
   If none of those work for you, the most recent issue is always at
   http://www.csl.sri.com/users/risko/risks.txt, and index at
   /risks-34.00
   ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since
   mid-2001)
   *** NOTE: If a cited URL fails, we do not try to update them.  Try
   browsing on the keywords in the subject line or cited article leads.
   Apologies for what Office365 and SafeLinks may have done to URLs.

==> Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

------------------------------

End of RISKS-FORUM Digest 34.86
************************
