Thailand Creeps into Action with Some Swiss Effort
February 24, 2025
Black hat hackers use their skills for their own gain. The cyber criminals recently caught in a raid conducted by three countries were definitely big-time scammers. Khaosod English reports on the takedown: “Thai-Swiss-US Operation Nets Hackers Behind 1,000+ Cyber Attacks.”
Four European hackers were arrested on the Thai island of Phuket. They were charged with using ransomware to steal $16 million from more than 1,000 victims. The hackers were wanted by Swiss and US authorities.
Thai, Swiss, and US law enforcement officials teamed up in Operation Phobos Aetor to arrest the bad actors. They were arrested on February 10, 2025 in Phuket. The details are as follows:
“The suspects, two men and two women, were apprehended at Mono Soi Palai, Supalai Palm Spring, Supalai Vista Phuket, and Phyll Phuket x Phuketique Phyll. Police seized over 40 pieces of evidence, including mobile phones, laptops, and digital wallets. The suspects face charges of Conspiracy to Commit an Offense Against the United States and Conspiracy to Commit Wire Fraud.
The arrests stemmed from an urgent international cooperation request from Swiss authorities and the United States, involving Interpol warrants for the European suspects who had entered Thailand as part of a transnational criminal organization.”
The ransomware attacks penetrated private networks, stole personal data, and encrypted files. The hackers demanded cryptocurrency payments for the decryption keys and threatened to publish the data if the ransoms weren’t paid.
Let’s give a round of applause to putting these crooks behind bars! On to Myanmar and Lao PDR!
Whitney Grace, February 24, 2025
Tales of Silicon Valley Management Method: Perceived Cruelty
February 21, 2025
A dinobaby post. No smart software involved.
I read an interesting write up. Is it representative? A social media confection? A suggestion that one of the 21st century’s masters of the universe harbors a Vlad the Impaler behavior? I don’t know. But the article “Laid-Off Meta Employees Blast Zuckerberg for Running the Cruelest Tech Company Out There As Some Claim They Were Blindsided after Parental Leave” caught my attention. Note: This is a paywalled write up, so you have to pay up.
Straight away I want to point out:
- AI does not have organic carbon based babies — at least not yet
- AI does not require health care — routine maintenance but the down time should be less than a year
- AI does not complain on social media about its gradient descents and Bayesian drift — hey, some do, like the new “I remember” AI from Google.
Now back to the write up. I noted this passage:
Over on Blind, an anonymous app for verified employees often used in the tech space, employees are noting that an unseasonable chill has come over Silicon Valley. Besides allegations of the company misusing the low-performer label, some also claimed that Meta laid them off while they were taking approved leave.
Yep, a social media business story.
There are other tech giants in the story, but one is cited as a source of an anonymous post:
A Microsoft employee wrote on Blind that a friend from Meta was told to “find someone” to let go even though everyone was performing at or above expectations. “All of these layoffs this year are payback for 2021–2022,” they wrote. “Execs were terrified of the power workers had [at] that time and saw the offers and pay at that time [are] unsustainable. Best way to stop that is put the fear of god back in the workers.”
I think that a big time, mainstream business publication has found a new source of business news: Employee complaint forums.
In the 1970s I worked with a fellow who was a big time reporter for Fortune. He ended up at the blue chip consulting firm helping partners communicate. He communicated with me. He explained how he tracked down humans, interviewed them, and followed up with experts to crank out enjoyable fact-based feature stories. He seemed troubled that the approach at a big time consulting firm was different from that of a big time magazine in Manhattan. He had an attitude, and he liked spending months working on a business story.
I recall him because he liked explaining his process.
I am not sure the story about the cruel Zuckster would have been one that he would have written. What’s changed? I suppose I could answer the question if I prowled social media employee grousing sites. But we are working on a monograph about Telegram, and we are taking a different approach. I suppose my method is closer to what my former colleague did in his Fortune days, reduced like a French sauce by the approach I learned at the blue chip consulting firm.
Maybe I should take up social media research, anonymous sources, and something snappy like cruelty to enliven our work? Nah, probably not.
Stephen E Arnold, February 21, 2025
OpenAI Furthers Great Research
February 21, 2025
Unsatisfied with existing AI cheating solutions? If so, Gizmodo has good news for you: “OpenAI’s ‘Deep Research’ Gives Students a Whole New Way to Cheat on Papers.” Writer Kyle Barr explains:
“OpenAI’s new ‘Deep Research’ tool seems perfectly designed to help students fake their way through a term paper unless asked to cite sources that don’t include Wikipedia. OpenAI’s new feature, built on top of its upcoming o3 model and released on Sunday, resembles one Google introduced late last year with Gemini 2.0. Google’s ‘Deep Research’ is supposed to generate long-form reports over the course of 30 minutes or more, depending on the depth of the requested topic. Boiled down, Google’s and OpenAI’s tools are AI agents capable of performing multiple internet searches while reasoning about the next step to generate a report.”
Deep Research even functions in a side panel, providing updates on its direction and progress. So helpful! However, the tool is not for those looking to score an A. Like a student rushing to finish a paper the old-fashioned way, Barr notes, it relies heavily on Wikipedia. An example report did include a few trusted sites, like Pew Research, but such reliable sources were in the minority. Besides, the write-up emphasizes:
“Remember, this is just a bot scraping the internet, so it won’t be accessing any non-digitized books or—ostensibly—any content locked behind a paywall. … Because it’s essentially an auto-Googling machine, the AI likely won’t have access to the most up-to-date and large-scale surveys from major analysis firms. … That’s not to say the information was inaccurate, but anybody who generates a report is at the mercy of suspect data and the AI’s interpretation of that data.”
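Boiled down, the “deep research” pattern both articles describe is an agent loop: search, take notes, decide what to look up next, then draft a report. Here is a minimal sketch of that loop in Python; the search and reasoning steps are stubs standing in for real web search and model calls, and every function name here is illustrative, not OpenAI’s or Google’s actual API:

```python
# Toy sketch of a "deep research" agent loop: iteratively search, accumulate
# notes, and choose the next query until a step budget runs out, then emit a
# report. A real tool would replace the stubs with a web search API and an LLM.

def fake_search(query: str) -> list[str]:
    # Stand-in for a web search call; returns canned "snippets".
    return [f"snippet about {query} #{i}" for i in range(2)]

def next_query(notes: list[str], topic: str, step: int) -> str:
    # Stand-in for the model reasoning about what to look up next.
    return f"{topic} follow-up {step}"

def deep_research(topic: str, max_steps: int = 3) -> str:
    notes: list[str] = []
    query = topic
    for step in range(max_steps):
        notes.extend(fake_search(query))        # gather sources
        query = next_query(notes, topic, step)  # reason about the next step
    # Stand-in for the final long-form report generation.
    return f"Report on {topic}: {len(notes)} snippets reviewed."

print(deep_research("Las Vegas travel tips"))
```

The key design point the articles hint at: the loop’s output quality is bounded by whatever `fake_search` returns, which is why a bot that mostly surfaces Wikipedia produces a Wikipedia-flavored report.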
Meh, we suppose that is okay if one just needs a C to get by. But is it worth the $200 per month subscription? I suppose that depends on the student, and on the parents’ willingness to sign up for services that will make gentle Ben and charming Chrissie smarter. Besides, we are sure more refined versions are in our future.
Cynthia Murrell, February 21, 2025
Gemini, the Couch Potato, Watches YouTube
February 21, 2025
Have you ever told yourself that you have too many YouTube videos to watch? Now you can save time by using Gemini AI to watch them for you. What is Gemini AI? Make Use Of explains in “Gemini Can Now Watch YouTube Videos And Save Hours Of Time.”
Google recently rolled out an update to its Gemini AI that allows users to catch up on YouTube videos without having to actually watch them. The new feature is a marvelous advancement! The new addition to Gemini 2.0 Flash will watch a video, then answer questions about it or provide a summary of it. Google users can access Gemini through the Gemini site or the smartphone app. It’s also available for free without the Gemini Advanced subscription.
To access the video watching feature, users must select the 2.0 Flash Thinking Experimental with apps model from the sidebar.
Here’s how the cited article’s author used Gemini:
“… I came across a YouTube video about eight travel tips for Las Vegas. Instead of watching the entire video, I simply asked Gemini, ‘What are the eight travel tips in this video?’ Gemini then processed the video and provided a concise summary of the travel tips. I also had Gemini summarize a video on changing a windshield wiper on a Honda CR-V, a chore I needed to complete. The results were simple and easy to understand, allowing me to glance at my iPhone screen instead of constantly stopping and starting the video during the process. The easiest way to grab a YouTube link is through your web browser or the Share button under the video.”
YouTube videos can be long and boring. Gemini condenses the information into digestible, quick-to-read bits. It’s an awesome tool, but if Gemini watches a video, does it count as a view for advertising? Will Gemini put on a few pounds snacking on Pringles?
Whitney Grace, February 21, 2025
Google and Personnel Vetting: Careless?
February 20, 2025
No smart software required. This dinobaby works the old fashioned way.
The Sundar & Prabhakar Comedy Show pulled another gag. This one did not delight audiences the way Prabhakar’s AI presentation did, nor does it outdo Google’s recent smart software gaffe. It is, however, a bit of a hoot for an outfit with money, smart people, and smart software.
I read the decidedly non-humorous news release from the Department of Justice titled “Superseding Indictment Charges Chinese National in Relation to Alleged Plan to Steal Proprietary AI Technology.” The write up states on February 4, 2025:
A federal grand jury returned a superseding indictment today charging Linwei Ding, also known as Leon Ding, 38, with seven counts of economic espionage and seven counts of theft of trade secrets in connection with an alleged plan to steal from Google LLC (Google) proprietary information related to AI technology. Ding was initially indicted in March 2024 on four counts of theft of trade secrets. The superseding indictment returned today describes seven categories of trade secrets stolen by Ding and charges Ding with seven counts of economic espionage and seven counts of theft of trade secrets.
Thanks, OpenAI, good enough.
Mr. Ding, obviously a Type A worker, appears to have been quite industrious at the Google. He was not working solely for the online advertising giant; he was also working for another entity. The DoJ news release describes his set up this way:
While Ding was employed by Google, he secretly affiliated himself with two People’s Republic of China (PRC)-based technology companies. Around June 2022, Ding was in discussions to be the Chief Technology Officer for an early-stage technology company based in the PRC. By May 2023, Ding had founded his own technology company focused on AI and machine learning in the PRC and was acting as the company’s CEO.
What technology caught Mr. Ding’s eye? The write up reports:
Ding intended to benefit the PRC government by stealing trade secrets from Google. Ding allegedly stole technology relating to the hardware infrastructure and software platform that allows Google’s supercomputing data center to train and serve large AI models. The trade secrets contain detailed information about the architecture and functionality of Google’s Tensor Processing Unit (TPU) chips and systems and Google’s Graphics Processing Unit (GPU) systems, the software that allows the chips to communicate and execute tasks, and the software that orchestrates thousands of chips into a supercomputer capable of training and executing cutting-edge AI workloads. The trade secrets also pertain to Google’s custom-designed SmartNIC, a type of network interface card used to enhance Google’s GPU, high performance, and cloud networking products.
At least, Mr. Ding validated the importance of some of Google’s sprawling technical insights. That’s a plus I assume.
One of the more colorful items in the DoJ news release concerned “evidence.” The DoJ says:
As alleged, Ding circulated a PowerPoint presentation to employees of his technology company citing PRC national policies encouraging the development of the domestic AI industry. He also created a PowerPoint presentation containing an application to a PRC talent program based in Shanghai. The superseding indictment describes how PRC-sponsored talent programs incentivize individuals engaged in research and development outside the PRC to transmit that knowledge and research to the PRC in exchange for salaries, research funds, lab space, or other incentives. Ding’s application for the talent program stated that his company’s product “will help China to have computing power infrastructure capabilities that are on par with the international level.”
Mr. Ding did not use Google’s cloud-based presentation program. I found the explicit desire to “help China” interesting. One wonders how Google’s Googley interview process, run by Googley people, failed to notice any indicators of Mr. Ding’s loyalties. Googlers are very confident of their Googliness, which obviously tolerates an insider threat who conveys data to a nation state known to be adversarial in its view of the United States.
I am a dinobaby, and I find this type of employee insider threat at Google remarkable. Google bought Mandiant. Google has internal security tools. Google has a very proactive stance about its security capabilities. However, in this case, I wonder if a Googler ever noticed that Mr. Ding used PowerPoint, not the Google-approved presentation program. No true Googler would use PowerPoint, an archaic, third party program Microsoft bought eons ago and has managed to pump full of steroids for decades.
Yep, the tell — Googlers who use Microsoft products. Sundar & Prabhakar will probably integrate a short bit into their act in the near future.
Stephen E Arnold, February 20, 2025
A Super Track from You Know Who
February 20, 2025
Those CAPTCHA hoops we jump through are there to protect sites from bots, right? As Boing Boing reports, not so much. In fact, bots can now easily beat reCAPTCHA tests. Then why are we forced to navigate them to do anything online? So the software can track us, collect our data, and make parent company Google even richer. And data brokers. Writer Mark Frauenfelder cites a recent paper in, “reCAPTCHA: 819 Million Hours of Wasted Human Time and Billions of Dollars in Google Profits.” We learn:
“‘They essentially get access to any user interaction on that web page,’ says Dr. Andrew Searles, a former computer security researcher at UC Irvine. Searle’s paper, titled ‘Dazed & Confused: A Large-Scale Real-World User Study of reCAPTCHAv2,’ found that Google’s widely-used CAPTCHA system is primarily a mechanism for tracking user behavior and collecting data while providing little actual security against bots. The study revealed that reCAPTCHA extensively monitors users’ cookies, browsing history, and browser environment (including canvas rendering, screen resolution, mouse movements, and user-agent data) — all of which can be used for advertising and tracking purposes. Through analyzing over 3,600 users, the researchers found that solving image-based challenges takes 557% longer than checkbox challenges and concluded that reCAPTCHA has cost society an estimated 819 million hours of human time valued at $6.1 billion in wages while generating massive profits for Google through its tracking capabilities and data collection, with the value of tracking cookies alone estimated at $888 billion.”
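A quick sanity check of the figures in the quoted passage (my arithmetic, not the paper’s — the three-second checkbox baseline is an assumed illustration):

```python
# Back-of-the-envelope check of the reCAPTCHA cost figures quoted above.
hours_wasted = 819_000_000   # human hours spent on reCAPTCHA challenges
wage_cost_usd = 6.1e9        # estimated value of that time in wages

implied_hourly_wage = wage_cost_usd / hours_wasted
print(f"Implied average wage: ${implied_hourly_wage:.2f}/hour")

# The study also says image challenges take 557% longer than checkbox ones,
# i.e. roughly 6.6x the time. Assume a checkbox takes about three seconds.
checkbox_seconds = 3.0
image_seconds = checkbox_seconds * (1 + 5.57)
print(f"Image challenge at that ratio: {image_seconds:.1f} seconds")
```

The implied wage works out to roughly $7.45 per hour, which is at least in the plausible range for a global average, so the headline numbers are internally consistent even if one quibbles with the inputs.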
That is quite a chunk of change. No wonder Google does not want to give up its CAPTCHA system—even if it no longer performs its original function. Why bother with matters of user inconvenience or even privacy when there are massive profits to be made?
Cynthia Murrell, February 20, 2025
What Do Gamers Know about AI? Nothing, Nothing at All
February 20, 2025
Take-Two Games CEO says, "There’s no such thing" as AI.
Is the head of a major gaming publisher using semantics to downplay the role of generative AI in his industry? PC Gamer reports, “Take-Two CEO Strauss Zelnick Takes a Moment to Remind Us Once Again that ‘There’s No Such Thing’ as Artificial Intelligence.” Writer Andy Chalk quotes Zelnick from a recent GamesIndustry interview:
"Artificial intelligence is an oxymoron, there’s no such thing. Machine learning, machines don’t learn. Those are convenient ways to explain to human beings what looks like magic. The bottom line is that these are digital tools and we’ve used digital tools forever. I have no doubt that what is considered AI today will help make our business more efficient and help us do better work, but it won’t reduce employment. To the contrary, the history of digital technology is that technology increases employment, increases productivity, increases GDP and I think that’s what’s going to happen with AI. I think the videogame business will probably be on the leading, if not bleeding, edge of using AI."
So AI, which does not exist, will actually create jobs instead of eliminate them? The write-up correctly notes the evidence points to the contrary. On the other hand, Strauss seems clear-eyed on the topic of copyright violations. AI-on-AI violations, anyway. We learn:
"That’s a mess Zelnick seems eager to avoid. ‘In terms of [AI] guardrails, if you mean not infringing on other people’s intellectual property by poaching their LLMs, yeah, we’re not going to do that,’ he said. ‘Moreover, if we did, we couldn’t protect that, we wouldn’t be able to protect our own IP. So of course, we’re mindful of what technology we use to make sure that it respects others’ intellectual property and allows us to protect our own.’"
Perhaps Strauss is on to something. It is true that generative AI is just another digital tool—albeit one that tends to put humans out of work. But as we know, hype is more important than reality for those chasing instant fame and riches.
Cynthia Murrell, February 20, 2025
Smart Software and Law Firms: Realities Collide
February 19, 2025
This blog post is the work of a real-live dinobaby. No smart software involved.
TechCrunch published “Legal Tech Startup Luminance, Backed by the Late Mike Lynch, Raises $75 Million.” Good news for Luminance. Now the company just needs to ring the bell for those putting up the money. The write up says:
Claiming to be capable of highly accurate interrogation of legal issues and contracts, Luminance has raised $75 million in a Series C funding round led by Point72 Private Investments. The round is notable because it’s one of the largest capital raises by a pure-play legal AI company in the U.K. and Europe. The company says it has raised over $115 million in the last 12 months, and $165 million in total. Luminance was originally developed by Cambridge-based academics Adam Guthrie (founder and chief technical architect) and Dr. Graham Sills (founder and director of AI).
Why is Luminance different? The method is similar to that used by Deepseek. With concerns about the cost of AI, a method which might be less expensive to get up and keep running seems like a good bet.
However, Eudia has raised $105 million with backing from people familiar with Relativity’s legal business. Law dot com suggests that Eudia will streamline legal business processes.
The article “Massive Law Firm Gets Caught Hallucinating Cases” offers an interesting anecdote about a large law firm facing sanctions. What did the big boys and girls at the law firm do? Those hard working Type A professionals cited nine cases to support an argument. There is just one trivial issue perplexing the senior partners. Eight of those cases were “nonexistent.” That means made up, invented, and spit out by a nifty black box of probabilities.
I am no lawyer. I did work as an expert witness and picked up some insight about the thought processes of big time lawyers. My observations may not apply to the esteemed organizations to which I linked in this short essay, but I will assume that I am close enough for horseshoes.
- Partners want big pay and juicy bonuses. If AI can help reduce costs and add protein powder to the compensation package, AI is definitely a go-to technology to use.
- Lawyers who are very busy all of the billable time, and then some, want to be more efficient. The hyperbole swirling around AI makes it clear that using an AI is a productivity booster. Do lawyers have time to check what the AI system did? Nope. Therefore, hallucination is going to be part of the transformer-based methodologies until something better becomes feasible. (Did someone say, “Quantum computers”?)
- The marketers (both directly compensated and the social media remoras) identify a positive. Then that upside is gilded like Tsar Nicholas’ powder room and repeated until it sure seems true.
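The verification step the sanctioned firm skipped is not complicated. A trivial sketch: check every citation an AI draft produces against an authoritative index before filing. The case names and the index below are invented placeholders, not real citations or a real legal database:

```python
# Minimal sketch of a pre-filing citation check: flag any case an AI draft
# cites that does not appear in an authoritative index. In practice the index
# would be a query against a real legal research service, not a hardcoded set.

VERIFIED_CASES = {
    "Smith v. Jones (2019)",       # placeholder entries
    "Acme Corp. v. Doe (2021)",
}

def flag_unverified(citations: list[str]) -> list[str]:
    """Return the citations that do not appear in the authoritative index."""
    return [c for c in citations if c not in VERIFIED_CASES]

draft_citations = [
    "Smith v. Jones (2019)",
    "Totally Real v. Made Up (2023)",  # a hallucinated case
]
print(flag_unverified(draft_citations))  # only the hallucinated entry remains
```

A check like this takes seconds per citation; the anecdote suggests even that was too much friction for a firm chasing billable hours.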
The reality for the investors is that AI could be a winner. Go for it. The reality for the lawyers is that the time to figure out what’s in bounds and what’s out of bounds is unlikely to be available. Other professionals will discover what the cancer docs did when using the late, great IBM Watson. AI can do some things reasonably well. Other things can have severe consequences.
Stephen E Arnold, February 19, 2025
Now I Get It: Duct Tape Jobs Are the Problem
February 19, 2025
A dinobaby post. No smart software involved.
“Is Ops a Bullsh&t Job?” appears to address the odd world of fix it people who work on systems of one sort or another. The focus in the write up is on software, but I think the essay reveals broader insight into work today. First, let’s look at a couple of statements in this very good essay, and, second, turn our attention briefly to the non-software programming sector.
I noted this passage attributed to an entity allegedly named Pablo:
Basically, we have two kinds of jobs. One kind involves working on core technologies, solving hard and challenging problems, etc. The other one is taking a bunch of core technologies and applying some duct tape to make them work together. The former is generally seen as useful. The latter is often seen as less useful or even useless, but, in any case, much less gratifying than the first kind. The feeling is probably based on the observation that if core technologies were done properly, there would be little or no need for duct tape.
The distinction strikes me as important. The really good programmers work on the “core” part of a system. A number of companies embrace this stratification of the allegedly most talented programmers and developers. This is a spin on what my seventh grade teacher called a “caste system.” I do remember thinking, “It is very important to get to the top of the pyramid; otherwise, life will be a chore.”
Another passage warranted a blue circle:
A “duct taper” is a role that only exists to solve a problem that ought not to exist in the first place.
The essay then provides some examples. Here are three from the essay:
- “My job was to transfer information about the state’s oil wells into a different set of notebooks than they were currently in.”
- “My day consisted of photocopying veterans’ health records for seven and a half hours a day. Workers were told time and again that it was too costly to buy the machines for digitizing.”
- “I was given one responsibility: watching an in-box that received emails in a certain form from employees in the company asking for tech help, and copy and paste it into a different form.”
Good stuff.
With that as background, here’s what I think the essay suggests.
The reason so many gratuitous changes, lousy basic services, and fights at youth baseball games are evident is a lack of meaningful work. Undertaking a project which a person, and everyone else around the individual, knows is meaningless creates a persistent sense of unease.
How is this internal agitation manifested? Let me identify several examples from my experiences this week. None is directly “technical,” but lurking in the background is the application of information to a function. When that information is distorted by the duct tape wrapped around a sensitive area, this is what happens in real life.
First, I had to get a tax sticker for my license plate. The number of people at the state agency was limited. More people entered than left. The cycle time for a sticker issuing professional was about 75 minutes. When I reached the desk of the SIP, I presented my documents. I learned that my proof of insurance was a one page summary of the policy I had on my auto. I learned, “We can only accept insurance cards. This is a sheet of paper, not a card. You come back when you have the card. Next.” Nifty. Duct tape wrapped around a procedure that required only a policy number and the name of the insurance provider.
Second, I bought three plastic wrapped packages of bottled water. I picked up a quart of milk. I put a package of raisins in my basket. I went through the self check out because no humans worked at the check out jobs at the time I visited. I scanned my items and placed them on the “Put purchases here” area. I inserted my credit card and the system honked and displayed, “Stay here a manager is coming.” Okay, I stayed there and noted that the other three self check outs had similar messages and honks coming from those self check out systems. I watched as a harried young person tried to determine if each of the four customers had stolen items. The fix he implemented was to have the four of us rescan the items. My system honked. My milk was not in the store’s system as a valid product. He asked me to step aside, and he entered the product number manually. Success for him. Utter failure for the grocery store.
Third, I picked up two shirts from the cleaners. I like my shirts with heavy starch. The two shirts had no starch. The young person had no idea what to do. I said, “Send the shirts through the process again and have your colleagues dip them in starch. The young worker told me, “We can’t do that. You have to pay the bill and then I will create a new work order.” Sorry. I paid the bill and went to another company’s store.
I am not sure these are duct tape jobs. If I needed money, I would certainly do the work and try to do my best. The message in the essay is that there are duct tape jobs. I disagree. The worker sees the job as beneath him or her and does not put physical, emotional, or intellectual effort in providing value to the employer or the customer.
Instead we get silly interface changes in Windows. We get truly stupid explanations about why a policy number cannot be entered from a sheet of paper, not a “card.” We get non-functioning check out systems and employees who don’t say, “Come to the register. I will get these processed and you out of here as fast as I can.”
Duct tape in the essay is about software. I think duct tape is a mind set issue. Use duct tape to make something better.
Stephen E Arnold, February 19, 2025
Speed Up Your Loss of Critical Thinking. Use AI
February 19, 2025
While the human brain isn’t a muscle, its neurology does need to be exercised to maintain plasticity. When a human brain is rigid, it can’t function in a healthy manner. AI is harming brains by making them not think good, says 404 Media: “Microsoft Study Finds AI Makes Human Cognition ‘Atrophied and Unprepared.’” You can read the complete Microsoft research report at this link. (My hunch is that this type of document would have gone the way of Timnit Gebru and the flying stochastic parrot, but that’s just my opinion, Hank, Advait, Lev, Ian, Sean, Dick, and Nick.)
Carnegie Mellon University and Microsoft researchers released a paper saying that the more humans rely on generative AI, the more it can “result in the deterioration of cognitive faculties that ought to be preserved.”
Really? You don’t say! What else does this remind you of? How about watching too much television or playing too many videogames? These passive activities (the label is arguable for videogames) stunt the development of brain gray matter and, in a flight of Mary Shelley rhetoric, make a brain rot! What did the researchers discover when they studied 319 knowledge workers who self-reported their experiences with generative AI:
“ ‘The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI,’ the researchers wrote. ‘Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving.’”
By the way, we definitely love and absolutely believe data based on self reporting. Think of the mothers who asked their teens, “Where did you go?” The response, “Out.” The mothers ask, “What did you do?” The answer, “Nothing.” Yep, self reporting.
Does this mean generative AI is a bad thing? Yes and no. It’ll stunt the growth of some parts of the brain, but other parts will grow in tandem with the use of new technology. Humans adapt to their environments. As AI becomes more ingrained into society it will change the way humans think but will only make them sort of dumber [sic]. The paper adds:
“ ‘GenAI tools could incorporate features that facilitate user learning, such as providing explanations of AI reasoning, suggesting areas for user refinement, or offering guided critiques,’ the researchers wrote. ‘The tool could help develop specific critical thinking skills, such as analyzing arguments, or cross-referencing facts against authoritative sources. This would align with the motivation enhancing approach of positioning AI as a partner in skill development.’”
The key is to not become overly reliant on AI but also to be aware that the tool won’t go away. Oh, and when my mother asked me, “What did you do, Whitney?” I responded in the best self reporting manner, “Nothing, mom, nothing at all.”
Whitney Grace, February 19, 2025