Sakana: Can Its Smart Software Replace Scientists and Grant Writers?

August 13, 2024

This essay is the work of a dumb dinobaby. No smart software required.

A couple of years ago, merging large language models seemed like a logical way to “level up” in the artificial intelligence game. The notion of intelligence aggregation implied that if competitor A was dumb enough to release models and other digital goodies as open source, an outfit in the proprietary software business could squish the other outfits’ LLMs into the proprietary system. The costs of building one’s own super-model could be reduced to some extent.

Merging is a very popular way to whip up pharmaceuticals. Take a little of this and a little of that and bingo one has a new drug to flog through the approval process. Another example is taking five top consultants from Blue Chip Company I and five top consultants from Blue Chip Company II and creating a smarter, higher knowledge value Blue Chip Company III. Easy.

A couple of Xooglers (former Google wizards) are promoting a firm called Sakana.ai. The purpose of the firm is to allow smart software (based on merging multiple large language models and proprietary systems and methods) to conduct and write up research (I am reluctant to use the word “original,” but I am a skeptical dinobaby). The company says:

One of the grand challenges of artificial intelligence is developing agents capable of conducting scientific research and discovering new knowledge. While frontier models have already been used to aid human scientists, e.g. for brainstorming ideas or writing code, they still require extensive manual supervision or are heavily constrained to a specific task. Today, we’re excited to introduce The AI Scientist, the first comprehensive system for fully automatic scientific discovery, enabling Foundation Models such as Large Language Models (LLMs) to perform research independently. In collaboration with the Foerster Lab for AI Research at the University of Oxford and Jeff Clune and Cong Lu at the University of British Columbia, we’re excited to release our new paper, The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery.

Sakana does not want to merge the “big” models. Its approach for robot-generated research is to combine specialized models. Examples that came to my mind were drug discovery and providing “good enough” blue chip consulting outputs. These are both expensive businesses to operate. Imagine the payoff if the Sakana approach delivers high value results. Instead of merging big, the company wants to merge small; that is, more specialized models and data. The idea is that specialized data may sidestep some of the interesting issues facing Google, Meta, and OpenAI, among others.
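
For readers who want a concrete picture of what “merging small” can mean, here is a minimal sketch of weight-space merging: simple parameter averaging across models that share an architecture. This is an illustration of the general technique, not Sakana’s actual method; the model names, layer shapes, and mixing weights are made up.

    # Minimal sketch of weight-space model merging (simple parameter averaging).
    # Illustrative only: the model names, shapes, and mixing weights are made up.
    import numpy as np

    def merge_checkpoints(checkpoints, mix=None):
        """Average several parameter dictionaries, key by key."""
        if mix is None:
            mix = [1.0 / len(checkpoints)] * len(checkpoints)
        merged = {}
        for name in checkpoints[0]:
            merged[name] = sum(w * ckpt[name] for w, ckpt in zip(mix, checkpoints))
        return merged

    # Two hypothetical specialized models that share one architecture.
    drug_discovery_model = {"layer1": np.random.randn(4, 4), "layer2": np.random.randn(4, 2)}
    consulting_model = {"layer1": np.random.randn(4, 4), "layer2": np.random.randn(4, 2)}

    merged = merge_checkpoints([drug_discovery_model, consulting_model], mix=[0.6, 0.4])
    print({name: weights.shape for name, weights in merged.items()})

The averaging itself is the easy part; the hard part is deciding which specialized models and data to combine and how to weight them.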


Sakana’s Web site provides this schematic to help the visitor get a sense of the mechanics of the smart software. The diagram is Sakana’s, not mine.

I don’t want to let science fiction get in the way of what today’s AI systems can do in a reliable manner. I want to make some observations about smart software making discoveries and writing useful original research papers or posts for BearBlog.dev.

  • The company’s Web site includes a link to a paper written by the smart software. With a sample of one, I cannot see much difference between it and the baloney cranked out by the Harvard medical group or Stanford’s former president. If software did the work, it is a good deep fake.
  • Should the software be able to assemble known items of information into something “novel,” the company has hit a home run in the AI ballgame. I am not a betting dinobaby. You make your own guess about the firm’s likelihood of success.
  • If the software works to some degree, quite a few outfits looking for a way to replace people with a Sakana licensing fee will sign up. Will these outfits renew? I have no idea. But “good enough” may be just what these companies want.

Net net: The Sakana.ai Web site includes a how-it-works section, more papers about items “discovered” by the software, and a couple of engineers-do-philosophy-and-ethics write ups. A “full scientific report” is available at https://arxiv.org/abs/2408.06292. I wonder if the software invented itself, wrote the documents, and did the marketing which caught my attention. Maybe?

Stephen E Arnold, August 13, 2024

The Upside of the Google Olympics Ad

August 13, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I learned that Google’s AI advertisements “feel bad for a reason.” And what is that reason? The write up “Those Olympics AI Ads Feel Bad for a Reason. It’s Not Just Google’s ‘Dear Sydney’ Commercial That Feels Soulless and Strange” provides the answer. (I want to mention that this headline seems soulless and strange, but I won’t.)

The write up reveals the “secret” of the Googler using Google AI to write to his Google progeny:

The latest spate of AI ad campaigns, for their part, have thus far failed to highlight how its products assist what the majority of Americans actually want to use AI for — namely, help with household chores — and instead end up showing how AI will be used for the things that most of us don’t want it to interfere with: our job prospects, our privacy, and experiences and skills that feel uniquely human. If the world already thinks of AI as menacing, wasteful, and yet another example of market overhype, these ads are only confirming our worst fears. No wonder they come off as so thoroughly insufferable.

I apologize for skipping the somewhat ho hum recitation of AI marketing gaffes. I bravely waded through the essay to identify the reason that AI ads make people “feel bad.” Am I convinced?

Nope.

I watched a version of the ad on my laptop. Based on my experience, I thought it was notable that the alleged Googley user remembered he had a family. I was impressed that the Googley father remembered where his Googley child was. I liked the idea of using AI to spare oneself the effort of crafting a message with words that connote interest, caring, and familial warmth.

Let’s face it. The ad was more satisfying than converting a news story like the death of a Google VP on a yacht.


How would Google’s smart software tell this story? I decided to find out. Here is what Gemini 1.5 Pro provided to me. Remember. I am a nerd dinobaby with a reputation for lacking empathy and noted for my work in certain specialized sectors:

It’s been a long time since Dean’s passing, but I wanted to reach out because I was thinking about him and his family. I came across an article about the woman who was with him when he passed. I know this might be a difficult thing to hear about, and I am so very sorry for your loss. Dean was such a bright light in this world, and I know how much he meant to you. Thinking of you during this time.

Amazing. The Googler’s drug death in the presence of a prostitute has been converted to a paragraph I could not possibly write. I would use a phrase like “nuked by horse” instead of “passed.” The phrase “I am so very sorry” is not what I would have been able to craft. My instinct is to say something like “The Googler tried to have fun and screwed up big time.” Finally, never would a nerd dinobaby like me write “thinking of you.” I would write, “Get to your attorney pronto.”

I know that real Googlers are not like nerd dinobabies. Therefore, it is perfectly understandable that the ad presents a version of reality which is not aspirational. It is a way for certain types of professionals to simulate interest and norm-core values.

Let’s praise Google and its AI.

Stephen E Arnold, August 13, 2024

Takedown Notices May Slightly Boost Sales of Content

August 13, 2024

It looks like take-down notices might help sales of legitimate books. A little bit. TorrentFreak shares the findings from a study by the University of Warsaw, Poland, in, “Taking Pirated Copies Offline Can Benefit Book Sales, Research Finds.” Writer Ernesto Van der Sar explains:

“This year alone, Google has processed hundreds of millions of takedown requests on behalf of publishers, at a frequency we have never seen before. The same publishers also target the pirate sites and their hosting providers directly, hoping to achieve results. Thus far, little is known about the effectiveness of these measures. In theory, takedowns are supposed to lead to limited availability of pirate sources and a subsequent increase in legitimate sales. But does it really work that way? To find out more, researchers from the University of Warsaw, Poland, set up a field experiment. They reached out to several major publishers and partnered with an anti-piracy outfit, to test whether takedown efforts have a measurable effect on legitimate book sales.”

See the write-up for the team’s methodology. There is a caveat: The study included only print books, because Poland’s e-book market is too small to be statistically reliable. This is an important detail, since digital e-books are a more direct swap for pirated copies found online. Even so, the researchers found takedown notices produced a slight bump in print-book sales. Research assistants confirmed they could find fewer pirated copies, and the ones they did find were harder to unearth. The write-up notes more research is needed before any conclusions can be drawn.

How hard will publishers tug at this thread? By this logic, closing libraries would help book sales, too. Eliminating review copies might generate a few sales. Why not publish books and keep them secret until Amazon provides a link? So many money-grubbing possibilities, and all it would cost is an educated public.

Cynthia Murrell, August 13, 2024

Some Fun with Synthetic Data: Includes a T Shirt

August 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Academics and researchers often produce bogus results, fiddle images (remember the former president of Stanford University), or just make up stuff. Despite my misgivings, I want to highlight what appear to be semi-interesting assertions about synthetic data. For those not following the nuances of using real data, doing some mathematical cartwheels, and producing made-up data which are just as good as “real” data, synthetic data for me is associated with Dr. Chris Ré and the Stanford Artificial Intelligence Laboratory (remember the ex-president of Stanford U., please). The term or code word for this approach to information suitable for training smart software is Snorkel. Snorkel became a company. Google embraced Snorkel. The looming litigation and big dollar settlements may make synthetic data a semi-big thing in a tech dust devil called artificial intelligence. The T shirt should read, “Synthetic data are write” like this:


I asked an AI system provided by the global leaders in computer security (yep, that’s Microsoft) to produce a T shirt for a synthetic data team. Great work and clever spelling to boot.

The “research” report appeared in Live Science. “AI Models Trained on Synthetic Data Could Break Down and Regurgitate Unintelligible Nonsense, Scientists Warn” asserts:

If left unchecked, “model collapse” could make AI systems less useful, and fill the internet with incomprehensible babble.

The unchecked term is a nice way of saying that synthetic data are cheap and less likely to become a target for copyright cops.

The article continues:

AI models such as GPT-4, which powers ChatGPT, or Claude 3 Opus rely on the many trillions of words shared online to get smarter, but as they gradually colonize the internet with their own output they may create self-damaging feedback loops. The end result, called “model collapse” by a team of researchers that investigated the phenomenon, could leave the internet filled with unintelligible gibberish if left unchecked.


People who think alike and create synthetic data will prove that “fake” data are as good as or better than “real” data. Why would anyone doubt such glib, well-educated people? Not me! Thanks, MSFT Copilot. Have you noticed similar outputs from your multitudinous AI systems?

In my opinion, the Internet, when compared to commercial databases produced with actual editorial policies, has been filled with “unintelligible gibberish” since the days I showed up at conferences to lecture about how hypertext was different from Gopher and Archie. When Mosaic sort of worked, I included that and left my NeXT computer at the office.

The write up continues:

As the generations of self-produced content accumulated, the researchers watched their model’s responses degrade into delirious ramblings.

After the data were fed into the system a number of times, the output presented was like this example from the researchers’ tests:

“architecture. In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-.”

The output might be helpful to those interested in church architecture.
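
For anyone who wants to see the feedback loop in miniature, here is a toy sketch of the collapse mechanism: each “generation” is trained only on text sampled from the previous generation. The vocabulary, probabilities, and sample sizes are invented for illustration; this is not the researchers’ actual setup, which used real language models and Wikipedia text.

    # Toy illustration of "model collapse": each generation learns only from
    # samples produced by the previous generation. Low-frequency words tend to
    # vanish and the distribution narrows. Vocabulary and numbers are made up.
    import random
    from collections import Counter

    vocab = ["architecture", "jackrabbit", "church", "population", "gibberish"]
    real_weights = [0.40, 0.30, 0.15, 0.10, 0.05]  # pretend "real" data distribution

    def train(corpus):
        """'Train' a model by estimating word frequencies from a corpus."""
        counts = Counter(corpus)
        total = sum(counts.values())
        return {word: counts.get(word, 0) / total for word in vocab}

    def generate(model, n):
        """Sample a synthetic corpus from the current model."""
        words = [w for w in vocab if model[w] > 0]
        return random.choices(words, weights=[model[w] for w in words], k=n)

    corpus = random.choices(vocab, weights=real_weights, k=200)  # generation 0: "real" data
    for generation in range(10):
        model = train(corpus)
        corpus = generate(model, 200)  # later generations see only synthetic text
        print(generation, {w: round(p, 2) for w, p in model.items()})

Run it a few times and the rarer words usually drop to zero within a handful of generations, which is the flavor of degradation the write up describes.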

Here’s the wrap up to the research report:

This doesn’t mean doing away with synthetic data entirely, Shumailov said, but it does mean it will need to be better designed if models built on it are to work as intended. [Note: Ilia Shumailov, a computer scientist at the University of Oxford, worked on this study.]

I must admit that the write up does not make clear what data were “real” and what data were “synthetic.” I am not sure how the test moved from Wikipedia to synthetic data. I have no idea where the headline originated. Was it synthetic?

Nevertheless, I think one might conclude that using fancy math to make up data that’s as good as real life data might produce some interesting outputs.

Stephen E Arnold, August 12, 2024

Copilot and Hackers: Security Issues Noted

August 12, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The online publication Cybernews ran a story I found interesting. Its title suggests something about Black Hat USA 2024 attendees I have not considered. Here’s the headline:

Black Hat USA 2024: Microsoft’s Copilot Is Freaking Some Researchers Out

Wow. Hackers (black, gray, white, and multi-hued) are “freaking out.” As defined by the estimable Urban Dictionary, “freaking” means:

Obscene dancing which simulates sex by the grinding the of the genitalia with suggestive sounds/movements. often done to pop or hip hop or rap music

No kidding? At Black Hat USA 2024?


Thanks, Microsoft Copilot. Freak out! Oh, y0ur dance moves are good enough.

The article reports:

Despite Microsoft’s claims, cybersecurity researcher Michael Bargury demonstrated how Copilot Studio, which allows companies to build their own AI assistant, can be easily abused to exfiltrate sensitive enterprise data. We also met with Bargury during the Black Hat conference to learn more. “Microsoft is trying, but if we are honest here, we don’t know how to build secure AI applications,” he said. His view is that Microsoft will fix vulnerabilities and bugs as they arise, letting companies using their products do so at their own risk.

Wait. I thought Microsoft had tied cash to security work. I thought security was Job #1 at the company which recently accused Delta Airlines of using outdated technology and failing its customers. Is that the Microsoft that Mr. Bargury is suggesting has zero clue how to make smart software secure?

With MSFT Copilot turning up in places that surprise me, perhaps Microsoft’s great AI push is creating more problems. The SolarWinds glitch was exciting for some, but if Mr. Bargury is correct, cyber security life will become more and more interesting.

Stephen E Arnold, August 12, 2024

Apple Does Not Just Take Money from Google

August 12, 2024

In an apparent snub to Nvidia, reports MacRumors, “Apple Used Google Tensor Chips to Develop Apple Intelligence.” The decision to go with Google’s TPUv5p chips over Nvidia’s hardware is surprising, since Nvidia has been dominating the AI processor market. (Though some suggest that will soon change.) Citing Apple’s paper on the subject, writer Hartley Charlton reveals:

“The paper reveals that Apple utilized 2,048 of Google’s TPUv5p chips to build AI models and 8,192 TPUv4 processors for server AI models. The research paper does not mention Nvidia explicitly, but the absence of any reference to Nvidia’s hardware in the description of Apple’s AI infrastructure is telling and this omission suggests a deliberate choice to favor Google’s technology. The decision is noteworthy given Nvidia’s dominance in the AI processor market and since Apple very rarely discloses its hardware choices for development purposes. Nvidia’s GPUs are highly sought after for AI applications due to their performance and efficiency. Unlike Nvidia, which sells its chips and systems as standalone products, Google provides access to its TPUs through cloud services. Customers using Google’s TPUs have to develop their software within Google’s ecosystem, which offers integrated tools and services to streamline the development and deployment of AI models. In the paper, Apple’s engineers explain that the TPUs allowed them to train large, sophisticated AI models efficiently. They describe how Google’s TPUs are organized into large clusters, enabling the processing power necessary for training Apple’s AI models.”

Over the next two years, Apple says, it plans to spend $5 billion on AI server enhancements. The paper gives a nod to ethics, promising no private user data is used to train its AI models. Instead, it uses publicly available web data and licensed content, curated to protect user privacy. That is good. Now what about the astronomical power and water consumption? Apple has no reassuring words for us there. Is it because Apple is paying Google, not just taking money from Google?

Cynthia Murrell, August 12, 2024

Podcasts 2024: The Long Tail Is a Killer

August 9, 2024

This essay is the work of a dumb humanoid. No smart software required.

One of my Laws of Online is that the big get bigger. Those who are small go nowhere.

My laws have not been popular since I started promulgating them in the early 1980s. But they are useful to me. The write up “Golden Spike: Podcasting Saw a 22% Rise in Ad Spending in Q2 [2024]” caught my attention. The information in the article, if on the money, appears to support the Arnold Law articulated in the first sentence of this blog post.


The long tail can be a killer. Thanks, MSFT Copilot. How’s life these days? Oh, that’s too bad.

The write up contains an item of information which is not surprising to those who paid attention in a good middle school or in a second-year economics class. (I know. Snooze time for many students.) The main idea is that a small number of items account for a large proportion of the total occurrences.

Here’s what the article reports:

Unsurprisingly, podcasts in the top 500 attracted the majority of ad spend, with these shows garnering an average of $252,000 per month each. However, the profits made by series down the list don’t have much to complain about – podcasts ranked 501 to 3000 earned about $30,000 monthly. Magellan found seven out of the top ten advertisers from the first quarter continued their heavy investment in the second quarter, with one new entrant making its way onto the list.

This means that of the estimated three to four million podcasts, the power law nails where the advertising revenue goes.
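
A back-of-the-envelope sketch shows how a power-law distribution produces exactly this kind of concentration. The exponent and the total ad-spend figure below are assumptions I made up for illustration; they are not Magellan’s numbers.

    # Rough sketch of how a Zipf-like (power-law) distribution concentrates ad revenue.
    # The exponent and total ad-spend figure are illustrative assumptions, not study data.
    podcast_count = 3_000_000
    alpha = 1.1                      # assumed power-law exponent
    total_ad_spend = 2_000_000_000   # assumed annual ad spend in dollars

    weights = [1 / (rank ** alpha) for rank in range(1, podcast_count + 1)]
    total_weight = sum(weights)

    top_500_share = sum(weights[:500]) / total_weight
    next_2500_share = sum(weights[500:3000]) / total_weight
    leftover = total_ad_spend * (1 - top_500_share - next_2500_share)

    print(f"Top 500 shows capture roughly {top_500_share:.0%} of the spend")
    print(f"Shows ranked 501 to 3,000 capture roughly {next_2500_share:.0%}")
    print(f"The remaining {podcast_count - 3000:,} podcasts split about "
          f"${leftover / (podcast_count - 3000):,.0f} each per year")

Whatever exponent one assumes, the head of the curve eats most of the money and the long tail splits crumbs.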

I mention this because when I go to the gym I listen to some of the podcasts on the Leo Laporte TWIT network. At one time, the vision was to create the CNN of the technology industry. Now the network seems to be the voice of podcasts which cannot generate sufficient money from advertising to pay the bills. Therefore, hasta la vista, staff, dedicated studio, and presumably other expenses associated with a permanent operation.

Other podcasts will be hit by the stinging long tail. The question becomes, “How do these 2.9 million podcasts make money?”

Here’s what I have noticed in the last few months:

  1. Podcasters (video and voice) just quit. I assume they get a job or move in with friends. Van life is too expensive due to the cost of fuel, food, and maintenance now that advertising is chasing the winners in the long tail game.
  2. Some beg for subscribers.
  3. Some point people to their Buy Me a Coffee or Patreon page, among other similar community support services.
  4. Some sell T shirts. One popular technology podcaster sells a $60 screwdriver. (I need that.)
  5. Some just whine. (No, I won’t single out the winning whiner.)

If I were teaching math, this podcast advertising data would make an interesting example of the power law. Too bad most will be impotent to change its impact on podcasting.

Stephen E Arnold, August 9, 2024

Can AI Models Have Abandonment Issues?

August 9, 2024

Gee, it seems the next big thing may now be just … the next thing. Citing research from Gartner, Windows Central asks, “Is GenAI a Dying Fad? A New Study Predicts 30% of Investors Will Jump Ship by 2025 After Proof of Concept.” This is on top of a Reuters Institute report released in May that concluded public “interest” in AI is all hype and very few people are actually using the tools. Writer Kevin Okemwa specifies:

“[Gartner] suggests ‘at least 30% of generative AI (GenAI) projects will be abandoned after proof of concept by the end of 2025.’ The firm attributes its predictions to poor data quality, a lack of guardrails to prevent the technology from spiraling out of control, and high operation costs with no clear path to profitability.”

For example, the article reminds us, generative AI leader OpenAI is rumored to be facing bankruptcy. Gartner Analyst Rita Sallam notes that, while executives are anxious for the promised returns on AI investments, many companies struggle to turn these projects into profits. Okemwa continues:

“Gartner’s report highlights the challenges key players have faced in the AI landscape, including their inability to justify the substantial resources ranging from $5 million to $20 million without a defined path to profitability. ‘Unfortunately, there is no one size fits all with GenAI, and costs aren’t as predictable as other technologies,’ added Sallam. According to Gartner’s report, AI requires ‘a high tolerance for indirect, future financial investment criteria versus immediate return on investment (ROI).’”

That must come as a surprise to those who banked on AI hype and expected massive short-term gains. Oh well. So, what will the next next big thing be?

Cynthia Murrell, August 9, 2024

DeepMind Explains Imagination, Not the Google Olympic Advertisement

August 8, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I admit it. I am suspicious of Google “announcements,” ArXiv papers, and revelations about the quantumly supreme outfit. I keep remembering the Google VP dead on a yacht with a special contract worker. I know about the Googler who tried to kill herself because a dalliance with a Big Time Google executive went off the rails. I know about the baby making among certain Googlers in the legal department. I know about the behaviors which the US Department of Justice described as “monopolistic.”

When I read “What Bosses Miss about AI,” I thought immediately about Google’s recent mass market televised advertisement about uses of Google artificial intelligence. The set up is that a father (obviously interested in his progeny) turned to Google’s generative AI to craft an electronic message to the humanoid. I know “quality time” is often tough to accommodate, but an email?

The Googler who allegedly wrote the cited essay has a different take on how to use smart software. First, most big-time thinkers are content with AI performing cost-reduction activities. AI is less expensive than a humanoid. Humanoids require health care, retirement, a shoulder upon which to cry (a key function for personnel in the human relations department), and time off.

Another type of big-time thinker grasps the idea that smart software can make processes more efficient. The write up describes this as the “do what we do, just do it better” approach to AI. The assumption is that the process is neutral, and it can be improved. Imagine the value of AI to Vlad the Impaler!

The third category of really Big Thinker is the leader who can use AI for imagination. I like the idea of breaking a chaotic mass of use cases into categories anchored to the Big Thinkers who use the technology.

However, I noted what I think is unintentional irony in the write up. This chart shows the non-AI approach to doing what leadership is supposed to do:


What happens when a really Big Thinker uses AI to zip through this type of process? The acceleration is delivered by AI. In this Googler’s universe, I think one can assume Google’s AI plays a modest role. Here’s the payoff paragraph:

Traditional product development processes are designed based on historical data about how many ideas typically enter the pipeline. If that rate is constant or varies by small amounts (20% or 50% a year), your processes hold. But the moment you 10x or 100x the front of that pipeline because of a new scientific tool like AlphaFold or a generative AI system, the rest of the process clogs up. Stage 1 to Stage 2 might be designed to review 100 items a quarter and pass 5% to Stage 2. But what if you have 100,000 ideas that arrive at Stage 1? Can you even evaluate all of them? Do the criteria used to pass items to Stage 2 even make sense now? Whether it is a product development process or something else, you need to rethink what you are doing and why you are doing it. That takes time, but crucially, it takes imagination.
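
A quick back-of-the-envelope calculation, using the illustrative numbers in the quoted passage, shows why the pipeline clogs:

    # Back-of-the-envelope math using the illustrative numbers from the quoted passage.
    review_capacity_per_quarter = 100   # ideas Stage 1 is designed to evaluate each quarter
    pass_rate = 0.05                    # share of reviewed ideas promoted to Stage 2
    incoming_ideas = 100_000            # ideas arriving after an AI-driven surge

    quarters_to_review_backlog = incoming_ideas / review_capacity_per_quarter
    promoted_per_quarter = review_capacity_per_quarter * pass_rate

    print(f"Quarters needed just to review the backlog: {quarters_to_review_backlog:,.0f}")
    print(f"That is about {quarters_to_review_backlog / 4:,.0f} years")
    print(f"Ideas promoted to Stage 2 each quarter: {promoted_per_quarter:.0f}")

At those rates the review queue alone stretches out for centuries, which is the Googler’s point: the old process, not the AI, becomes the bottleneck.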

Let’s think about this advice and consider the imagination component of the Google Olympics’ advertisement.

  1. Google implemented a process, spent money, did “testing,” ran the advert, and promptly withdrew it. Why? The ad was annoying to humanoids.
  2. Google’s “imagination” did not work. Perhaps this is a failure of the Google AI and the Google leadership? The advert succeeded in making Google the focal point of some good, old-fashioned, quite humanoid humor. Laughing at Google AI is certainly entertaining, but it appears to have been something that Google’s leadership could not “imagine.”
  3. The Google AI obviously reflects Google engineering choices. The parent who must turn to Google AI to demonstrate love, parental affection, and support to one’s child is, in my opinion, quite Googley. Whether the action is human or not might be an interesting topic for a coffee shop discussion. For non-Googlers, the idea of talking about what many perceived as stupid, insensitive, and inhumane is probably a non-starter. Just post on social media and move on.

Viewed in a larger context, the cited essay makes it clear that Googlers embrace AI. Googlers see others’ reaction to AI as ranging from doltish to informed. Google liked the advertisement well enough to pay other companies to show the message.

I suggest the following: Google leadership should ask several AI systems if proposed advertising copy can be more economical. That’s a Stage 1 AI function. Then Google leadership should ask several AI systems how the process of creating the ideas for an advertisement can be improved. That’s a Stage 2 AI function. And, finally, Google leadership should ask, “What can we do to prevent bonkers problems resulting from trying to pretend we understand people who know nothing and care less about the three ‘stages’ of AI understanding?”

Will that help out the Google? I don’t need to ask an AI system. I will go with my instinct. The answer is, “No.”

That’s one of the challenges Google faces. The company seems unable to help itself do anything other than sell ads, promote its AI system, and cruise along in quantumly supremeness.

Stephen E Arnold, August 8, 2024

Thoughts about the Dark Web

August 8, 2024

This essay is the work of a dumb humanoid. No smart software required.

The Dark Web. Wow. YouTube offers a number of tell-all videos about the Dark Web. Articles explain the topics one can find on certain Dark Web fora. What’s forgotten is that the number of users of the Dark Web has been chugging along, neither gaining tens of millions of users nor losing tens of millions of users. Why? Here’s a traffic chart from the outfit that sort of governs The Onion Router:


Source: https://metrics.torproject.org/userstats-relay-country.html

The chart is not the snappiest item on the sprawling Torproject.org Web site, but the message seems to be that TOR has been bouncing around two million users this year. Go back in time and the number has increased, but not much. Online statistics, particularly those associated with obfuscation software, are mushy. Let’s toss in another couple million users to account for alternative obfuscation services. What happens? We are not in the tens of millions.

Our research suggests that the stability of TOR usage is due to several factors:

  1. The hard core bad actors comprise a small percentage of the TOR usage and probably do more outside of TOR than within it. In September 2024 I will be addressing this topic at a cyber fraud conference.
  2. The number of entities indexing the “Dark Web” remains relatively stable. Sure, some companies drop out of this data harvesting, but the big outfits remain, and their software looks a lot like a user, particularly with some of the wonky verification Dark Web sites use to block automated collection of data.
  3. Regular Internet users don’t pay much attention to TOR, including those with the one-click access browsers like Brave.
  4. Human investigators are busy looking and interacting, but the numbers of these professionals also remains relatively stable.

To sum up, most people know little about the Dark Web. When these individuals figure out how to access a Web site advertising something exciting like stolen credit cards or other illegal products and services, they are unaware of a simple fact: An investigator from some country may be operating like a bad actor to find a malefactor. By the way, the Dark Web is not as big as some cyber companies assert. The actual number of truly bad Dark Web sites is fewer than 100, based on what my researchers tell me.


A very “good” person approaches an individual who looks like a very tough dude. The very “good” person has a special job for the tough dude. Surprise! Thanks, MSFT Copilot. Good enough, and you should know what certain professionals look like.

I read “Former Pediatrician Stephanie Russell Sentenced in Murder Plot.” The story is surprisingly not that unique. The reason I noted a female pediatrician’s involvement in the Dark Web is that she lives about three miles from my office. The story is that the good doctor visited the Dark Web and tried to hire a hit man to terminate an individual. (Don’t doctors know how to terminate as part of their studies?)

The write up reports:

A Louisville judge sentenced former pediatrician Stephanie Russell to 12 years in prison Wednesday for attempting to hire a hitman to kill her ex-husband multiple times.

I love the somewhat illogical phrase “kill her ex-husband multiple times.”

Russell pleaded guilty April 22, 2024, to stalking her former spouse and trying to have him killed amid a protracted custody battle over their two children. By accepting responsibility and avoiding a trial, Russell would have expected a lighter prison sentence. However, she again tried to find a hitman, this time asking inmates to help with the search, prosecutors alleged in court documents asking for a heftier prison sentence.

One rumor circulating at the pub, which is a popular lunch spot near the doctor’s former office, is that she used the Dark Web and struck up an online conversation with one of the investigators monitoring such activity.

Net net: The Dark Web is indeed interesting.

Stephen E Arnold, August 8, 2024
