A Handy Resource: 100 AI Tools in 10 Categories
February 27, 2025
We hear a lot about the most prominent AI tools like ChatGPT, Dall-E, and Grammarly. But there are many more options designed for a wide range of tasks. Inspiration blogger Ayo-Ibidapo has rounded up "100 AI Tools for Every Need: The Ultimate List." He succinctly introduces his list by observing:
"AI is revolutionizing industries, making tasks easier, faster, and more efficient. Whether you need AI for writing, design, marketing, coding, or personal productivity, there’s a tool for you. Here’s a list of 100 AI tools categorized by their purpose."
The 10 categories include those above and more, including my favorite, "Miscellaneous and Fun." As a life-long gamer, I am drawn to AI Dungeon. I am not so sure about the face-swapping tool, Reface AI. Seems a bit creepy. I am curious whether any of the investing tools, like Alpaca, Kavout, or Trade Ideas could actually boost one’s portfolio. And I am pleased to see the esteemed Wolfram Alpha made the list in the education and research section. As for the ten entries under healthcare and wellness, I wonder: are we resigned to sharing our most intimate details with bots? Ginger AI, for mental health support, sounds non-threatening, but are there any data-grubbing details buried in its terms of service agreement?
See the post for all 100 tools. If that is not enough, check out the discussion at Battle Station, "Uncover 30,000+ AI Apps Using AITrendyTools." There’s an idea—what better way to pick an AI tool than an AI tool?
Cynthia Murrell, February 27, 2025
Meta and Torrents: True, False, or Rationalization?
February 26, 2025
AIs gobble datasets for training. It is also a fact that many LLMs and datasets contain biased information, are incomplete, or plain stink. One ethical but cumbersome way to train algorithms would be to notify people that their data, creative content, or other information will be used to train AI. Offering to pay for the right to use the data would be a useful next step, some argue.
Will this happen? Obviously not.
Why?
Because it’s sometimes easier to take instead of asking. According to Tom’s Hardware, “Meta Staff Torrented Nearly 82TB Of Pirated Books For AI Training-Court Records Reveal Copyright Violations.” The article explains that Meta pirated 81.7 TB of books from the shadow libraries Anna’s Archive, Z-Library, and LibGen. These books were then used to train AI models. Meta is now facing a class action lawsuit about using content from the shadow libraries.
The allegations arise from Meta employees’ written communications. Some of these messages provide insight into employees’ concerns about tapping pirated materials. The employees were getting frown lines, but then some staffers’ views rotated when they concluded smart software helped people access information.
Here’s a passage from the cited article I found interesting:
“Then, in January 2023, Mark Zuckerberg himself attended a meeting where he said, “We need to move this stuff forward… we need to find a way to unblock all this.” Some three months later, a Meta employee sent a message to another one saying they were concerned about Meta IP addresses being used “to load through pirate content.” They also added, “torrenting from a corporate laptop doesn’t feel right,” followed by laughing out loud emoji. Aside from those messages, documents also revealed that the company took steps so that its infrastructure wasn’t used in these downloading and seeding operations so that the activity wouldn’t be traced back to Meta. The court documents say that this constitutes evidence of Meta’s unlawful activity, which seems like it’s taking deliberate steps to circumvent copyright laws.”
If true, the approach smacks of that suave Silicon Valley style. If false, my faith in a yacht owner with gold chains might be restored.
Whitney Grace, February 26, 2025
Innovation: It Ebbs, It Flows, It Fizzles
February 26, 2025
Many would argue humanity is nothing if not creative. If not, we would be living the way we did thousands of years ago. But, asks the Financial Times, "Is Innovation Slowing Down? With Matt Clancy." Nah—look how innovative iPhones and Windows upgrades are.
The post presents the audio of an interview between journalist John Burn-Murdoch and economist Matt Clancy. (The transcript can be found here.) The page introduces the interview:
"Productivity growth in the developed world has been on a downward trend since the 1960s. Meanwhile, gains in life expectancy have also slowed. And yet the number of dollars and researchers dedicated to R&D grows every year. In today’s episode, the FT’s Chief Data Reporter, John Burn-Murdoch, asks whether western culture has lost its previous focus on human progress and become too risk-averse, or whether the problem is simply that the low-hanging fruit of scientific research has already been plucked. He does so in conversation with innovation economist Matt Clancy, who is the author of the New Things Under the Sun blog, and a research fellow at Open Philanthropy, a non-profit foundation based in San Francisco that provides research grants."
The pair begin by recalling a theory of economic historian Joel Mokyr, who believes a growing belief in human progress and experimentation led to the Industrial Revolution. The perspective, believes Clancy, is supported by a 2023 study that examined thousands of political and scientific books from the 1500s–1700s. That research shows a growing interest in progress during that period. Sounds plausible.
But now, we learn, innovation appears to be in decline. Research output per scientist has decreased since 1960, despite increased funding. Productivity growth and technological output are also slowing. Is this because our culture has grown less interested in invention? To hear Clancy tell it, probably not. A more likely suspect is what economist Ben Jones dubbed the Burden of Knowledge. Basically, as humanity makes discoveries that build on each other, each human scientist has more to learn before they can contribute new ideas. This also means more individual specialization and more teamwork. Of course, adding meetings to the mix slows everything down.
The economist has suggestions, like funding models that reward risk-taking. He also believes artificial intelligence will significantly speed things up. Probably—but will it send us careening down the wrong paths? AI will have to get far better at not making mistakes, or making stuff up, before we should trust it at the helm of human progress.
Cynthia Murrell, February 26, 2025
AI Research Tool from Perplexity Is Priced to Undercut the Competition
February 26, 2025
Are prices for AI-generated research too darn high? One firm thinks so. In a Temu-type bid to take over the market, reports VentureBeat, "Perplexity Just Made AI Research Crazy Cheap—What that Means for the Industry." CEO Aravind Srinivas credits open source software for making the move possible, opining that "knowledge should be universally accessible." Knowledge, yes. AI research? We are not so sure. Nevertheless, here we are. The write-up describes the difference in pricing:
"While Anthropic and OpenAI charge thousands monthly for their services, Perplexity offers five free queries daily to all users. Pro subscribers pay $20 monthly for 500 daily queries and faster processing — a price point that could force larger AI companies to explain why their services cost up to 100 times more."
Not only is Perplexity’s Deep Research cheaper than the competition, crows the post, its accuracy rivals theirs. We are told:
"[Deep Research] scored 93.9% accuracy on the SimpleQA benchmark and reached 20.5% on Humanity’s Last Exam, outperforming Google’s Gemini Thinking and other leading models. OpenAI’s Deep Research still leads with 26.6% on the same exam, but OpenAI charges $200 per month for that service. Perplexity’s ability to deliver near-enterprise level performance at consumer prices raises important questions about the AI industry’s pricing structure."
Well, okay. Not to stray too far from the point, but is a 20.5% or a 26.6% on Humanity’s Last Exam really something to brag about? Last we checked, those were failing grades. By far. Isn’t it a bit too soon to be outsourcing research to any LLM? But I digress.
We are told the low, low cost Deep Research is bringing AI to the micro-budget masses. And, soon, to the Windows-less—Perplexity is working on versions for iOS, Android, and Mac. Will this spell disaster for the competition?
Cynthia Murrell, February 26, 2025
Researchers Raise Deepseek Security Concerns
February 25, 2025
What a shock. It seems there are some privacy concerns around Deepseek. We learn from the Boston Herald, “Researchers Link Deepseek’s Blockbuster Chatbot to Chinese Telecom Banned from Doing Business in US.” Former Wall Street Journal reporter and now AP professional Byron Tau writes:
“The website of the Chinese artificial intelligence company Deepseek, whose chatbot became the most downloaded app in the United States, has computer code that could send some user login information to a Chinese state-owned telecommunications company that has been barred from operating in the United States, security researchers say. The web login page of Deepseek’s chatbot contains heavily obfuscated computer script that when deciphered shows connections to computer infrastructure owned by China Mobile, a state-owned telecommunications company.”
If this is giving you déjà vu, dear reader, you are not alone. This scenario seems much like the uproar around TikTok and its Chinese parent company ByteDance. But it is actually worse. ByteDance’s direct connection to the Chinese government is, as of yet, merely hypothetical. China Mobile, on the other hand, is known to have direct ties to the Chinese military. We learn:
“The U.S. Federal Communications Commission unanimously denied China Mobile authority to operate in the United States in 2019, citing ‘substantial’ national security concerns about links between the company and the Chinese state. In 2021, the Biden administration also issued sanctions limiting the ability of Americans to invest in China Mobile after the Pentagon linked it to the Chinese military.”
It was Canadian cybersecurity firm Feroot Security that discovered the code. The AP then had the findings verified by two academic cybersecurity experts. Might similar code be found within TikTok? Possibly. But, as the article notes, the information users feed into Deepseek is a bit different from the data TikTok collects:
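Feroot has not published its full methodology, but the general technique the researchers describe (decode the obfuscated login script, then look for hardcoded third-party infrastructure) can be sketched in a few lines. Everything below is invented for illustration: the encoded sample is a made-up snippet, and the watch list contains cmpassport.com, a domain the reporting links to China Mobile.

```python
# Toy sketch of the analysis approach described in the article: decode
# encoded blobs in a page script and collect any hardcoded endpoints.
# The sample script and watch list are fabricated for this illustration.
import base64
import re

WATCH_LIST = {"cmpassport.com"}  # domain attributed to China Mobile in reporting

def find_endpoints(script: str) -> set[str]:
    """Base64-decode atob() blobs in a script and collect domains found."""
    found = set()
    for blob in re.findall(r"atob\('([A-Za-z0-9+/=]+)'\)", script):
        try:
            decoded = base64.b64decode(blob).decode("utf-8", "ignore")
        except Exception:
            continue  # not valid base64; skip the blob
        found.update(re.findall(r"https?://([\w.-]+)", decoded))
    return found

# A made-up, lightly obfuscated login script for demonstration.
sample = "var u=atob('aHR0cHM6Ly9jbXBhc3Nwb3J0LmNvbS9sb2dpbg==');"
hits = find_endpoints(sample) & WATCH_LIST
print(hits)  # → {'cmpassport.com'}
```

Real obfuscation is, of course, far heavier than a single atob() call, which is presumably why verification by independent experts was worth the AP’s time.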
“Users are increasingly putting sensitive data into generative AI systems — everything from confidential business information to highly personal details about themselves. People are using generative AI systems for spell-checking, research and even highly personal queries and conversations. The data security risks of such technology are magnified when the platform is owned by a geopolitical adversary and could represent an intelligence goldmine for a country, experts warn.”
Interesting. But what about CapCut, the ByteDance video thing?
Cynthia Murrell, February 25, 2025
Musings on AI UI Design
February 25, 2025
The advent of AI has sent UI designers back to the drawing tablet. Tech product designer and blogger Patrick Morgan considers "8 Design Breakthroughs Defining AI’s Future." As when touch-based devices became common, he asserts, design choices made now will shape the ways we interact with technology for years to come. Morgan writes:
"For the first time in over a decade, we’re facing a truly greenfield space in user experience design. There’s no playbook, no established patterns to fall back on. Even the frontier AI labs are learning through experimentation, watching to see what resonates as they introduce new ways to interact. … It’s fascinating to watch these design choices ripple across the ecosystem in real-time. When something works, competitors rush to adopt it — not out of laziness, but because we’re all collectively discovering what makes sense in this new paradigm. In this wild-west moment, new dominant patterns are emerging. Today, I want to highlight the breakthroughs that have captured my imagination the most — the design choices shaping our collective understanding of AI interaction."
The roundup includes obvious choices—conversational paradigms like ChatGPT’s interface and voice input systems in general. Morgan also admires integration a la Cursor IDE and Claude Artifacts, and he appreciates the helpful Grok button alongside content on X. He gives kudos for transparency, like Perplexity’s real-time citations and Deepseek’s process descriptions. Morgan even gives credit to MidJourney for refusing to build its own UI until it had refined its core technology. He reflects:
"These eight breakthroughs aren’t just clever UI decisions — they’re the first chapters in a new story about how humans and machines work together. Each represents a moment when someone dared to experiment, to try something unproven, and found a pattern that resonated."
Yes. And also: Ultimately, AI will be invisible—embedded and out of sight, outputting information. Interfaces undergo constant change by people with time on their hands. UI changes should not distract from the actual trajectory of smart and smarter software. Where do we stand on bias, hallucinations, privacy, and accountability? Those, we believe, are the more pertinent questions. But, sure, UI choices are nifty to observe.
Cynthia Murrell, February 25, 2025
Rest Easy. AI Will Not Kill STEM Jobs
February 25, 2025
Written by a dinobaby, not smart software. But I would replace myself with AI if I could.
Bob Hope quipped, “A sense of humor is good for you. Have you ever heard of a laughing hyena with heartburn?” No, Bob, I have not.
Here’s a more modern joke for you from the US Bureau of Labor Statistics circa 2025. It is much fresher than Mr. Hope’s quip from a half century ago.
The Bureau of Labor Statistics says:
Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. (Source: Investopedia)
Okay, I wonder what to make of those LinkedIn, XTwitter, and Reddit posts about technology workers not being able to find jobs. Consider these situations:
- Recent college graduates with computer science degrees
- Recently terminated US government workers from agencies like 18F
- Workers over 55 urged to take early retirement.
The item about the rosy job market appeared in Slashdot too. Here’s the quote I noted:
Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. According to the BLS, the impact AI will have on tech-sector employment is highly uncertain. For one, AI is adept at coding and related tasks. But at the same time, as digital systems become more advanced and essential to day-to-day life, more software developers, data managers, and the like are going to be needed to manage those systems. "Although it is always possible that AI-induced productivity improvements will outweigh continued labor demand, there is no clear evidence to support this conjecture," according to BLS researchers.
Robert Half, an employment firm, is equally optimistic. Just a couple of weeks ago, that outfit said:
Companies continue facing strong competition from other firms for tech talent, particularly for candidates with specialized skills. Across industries, AI proficiency tops the list of most-sought capabilities, with organizations needing expertise for everything from chatbots to predictive maintenance systems. Other in-demand skill areas include data science, IT operations and support, cybersecurity and privacy, and technology process automation.
What am I to conclude from these US government data? Here are my preliminary thoughts:
- The big time consulting firms are unlikely to change their methods of cost reduction; that is, if software (smart or dumb) can do a job for less money, that software will be included on a list of options. Given a choice of going out of business or embracing smart software, a significant percentage of consulting firm clients will give AI a whirl. If AI works and the company stays in business or grows, the humans will be repurposed or allowed to find their future elsewhere.
- The top one percent in any discipline will find work. The other 99 percent will need to have family connections, family wealth, or a family business to provide a boost for a great job. What if a person is not in the top one percent of something? Yeah, well, that’s not good for quite a few people.
- The permitted dominance of duopolies or oligopolies in most US business sectors means that some small and mid-sized businesses will have to find ways to generate revenue. My experience in rural Kentucky is that local accounting, legal, and technology companies are experimenting with smart software to boost productivity (the MBA word for cheaper work functions). Local employment options are dwindling because the smaller employers cannot stay in business. Potential employees want more pay than the company can afford. Result? Downward spiral which appears to be accelerating.
Am I confident in statistics related to wages, employment, and the growth of new businesses and industrial sectors? No, I am not. Statistical projections work pretty well in nuclear fuel management. Nested mathematical procedures in smart software work pretty well for some applications. Using smart software to reduce operating costs works pretty well right now.
Net net: Without meaningful work, some of life’s challenges will spark unanticipated outcomes. Exactly what type of stress breaks a social construct? Those in the job hunt will provide numerous test cases, and someone will do an analysis. Will it be correct? Sure, close enough for horseshoes.
Stop complaining. Just laugh as Mr. Hope noted. No heartburn and cost savings to boot.
Stephen E Arnold, February 25, 2025
Content Injection Can Have Unanticipated Consequences
February 24, 2025
The work of a real, live dinobaby. Sorry, no smart software involved. Whuff, whuff. That’s the sound of my swishing dino tail. Whuff.
Years ago I gave a lecture to a group of Swedish government specialists affiliated with the Forestry Unit. My topic was the procedure for causing certain common algorithms used for text processing to increase the noise in their procedures. The idea was to input certain types of text and numeric data in a specific way. (No, I will not disclose the methods in this free blog post, but if you have a certain profile, perhaps something can be arranged by writing benkent2020 at yahoo dot com. If not, well, that’s life.)
We focused on a handful of methods widely used in what now is called “artificial intelligence.” Keep in mind that most of the procedures are not new. There are some flips and fancy dancing introduced by individual teams, but the math is not invented by TikTok teens.
In my lecture, the forestry professionals wondered if these methods could be used to achieve specific objectives or “ends”. The answer was and remains, “Yes.” The idea is simple. Once methods are put in place, the algorithms chug along, some are brute force and others are probabilistic. Either way, content and data injections can be shaped, just like the gizmos required to make kinetic events occur.
The point of this forestry excursion is to make clear that a group of people, operating in a loosely coordinated manner, can create data or content. Those data or content can be weaponized. When ingested by or injected into a content processing flow, the outputs of the larger system can be fiddled: More emphasis here, a little less accuracy there, and an erosion of whatever “accuracy” calculations are used to keep the system within the engineers’ and designers’ parameters. A plebian way to describe the goal: Disinformation or accuracy erosion.
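A toy example makes the mechanism concrete. Below is a crude bag-of-words scorer with all data invented for the sketch; it is not one of the methods from my lecture, just the simplest possible stand-in. A handful of coordinated injections is enough to flip its output:

```python
# Toy illustration of content injection: a crude word-count sentiment
# scorer whose verdict flips after a few coordinated "poison" documents
# pair the target vocabulary with the opposite label. All data invented.
from collections import Counter

def train(docs):
    """Count word occurrences per label; a bare-bones bag-of-words model."""
    counts = {"pos": Counter(), "neg": Counter()}
    for label, text in docs:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Return 'pos' or 'neg' by comparing summed per-label word counts."""
    words = text.lower().split()
    pos = sum(counts["pos"][w] for w in words)
    neg = sum(counts["neg"][w] for w in words)
    return "pos" if pos >= neg else "neg"

clean = [
    ("pos", "reliable product works well"),
    ("neg", "broken product fails badly"),
]
model = train(clean)
print(score(model, "product works"))  # → pos

# Five injected documents tie the target phrase to the negative label.
poison = [("neg", "product works product works product works")] * 5
model = train(clean + poison)
print(score(model, "product works"))  # → neg
```

Production systems use far more robust statistics than raw counts, but the principle scales: shaped inputs shift the weights, and the shift is hard to spot from the outputs alone.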
I read “Meet the Journalists Training AI Models for Meta and OpenAI.” The write up explains that journalists without jobs or in search of extra income are creating “content” for smart software companies. The idea is that if one just does the Silicon Valley thing and sucks down any and all content, lawyers might come calling. Therefore, paying for “real” information is a better path.
Please read the original article to get a sense of who is doing the writing and what baggage or mind-set these people might bring to their work.
If the content is distorted — either intentionally or unintentionally — the impact of these content objects on the larger smart software system might be significant. I just wanted to point out that weaponized information can have an impact. Those running smart software and buying content on the assumption that it is just fine might find some interesting consequences in the outputs.
Stephen E Arnold, February 24, 2025
AI Worriers, Play Some Bing Crosby Music
February 24, 2025
This blog post is the work of a real-live dinobaby. No smart software involved.
The Guardian newspaper ran an interesting write up about smart software and the futility of complaining to stop it in its tracks. “I Met the Godfathers of AI in Paris – Here’s What They Told Me to Really Worry About.” I am not sure what’s being taught in British schools, but the headline features the author, a split infinitive, and the infamous “ending a sentence with a preposition” fillip. Very sporty.
The write up includes quotes from the godfathers:
“It’s not today’s AI we need to worry about, it’s next year’s,” Tegmark told me. “It’s like if you were interviewing me in 1942, and you asked me: ‘Why aren’t people worried about a nuclear arms race?’ Except they think they are in an arms race, but it’s actually a suicide race.”
I am not sure what psychologists call worrying about the future. Bing Crosby took a different approach. He sang, “Don’t worry about tomorrow” and offered:
Why should we cling to some old faded thing
That used to be
Bing looked beyond the present but did not seem unduly worried. The Guardian is a bit more uptight.
The write up says:
The idea that we, on Earth, might lose control of an AGI that then turns on us sounds like science fiction – but is it really so far-fetched considering the exponential growth of AI development? As Bengio [an AI godfather, according to the Guardian] pointed out, some of the most advanced AI models have already attempted to deceive human programmers during testing, both in pursuit of their designated objectives and to escape being deleted or replaced with an update.
I circled this passage:
It seems as if we have a shrinking opportunity to lay down the incentives for companies to create the kind of AI that actually benefits our individual and collective lives: sustainable, inclusive, democracy-compatible, controlled. And beyond regulation, “to make sure there is a culture of participation embedded in AI development in general”, as Eloïse Gabadou, a consultant to the OECD on technology and democracy, put it.
Okay, so what’s the fix? Who implements the fix? Will the fix stop British universities in Manchester, Cambridge, and Oxford among others from teaching about AI or stop researchers from fiddling with snappier methods? Will the Mayor of London shut down the DeepMind outfit?
Nope. I am delighted that some people are talking about smart software. However, in the high tech world in which we live, I want to remind the Guardian, the last train for Chippenham has left the station. Too late, old chap. Learn to play Bing’s song. Chill.
Stephen E Arnold, February 24, 2025
Advice for Programmers: AI-Proof Your Career
February 24, 2025
Software engineer and blogger Sean Goedecke has some career advice for those who, like himself, are at risk of losing their programming jobs to AI. He counsels, "To Avoid Being Replaced by LLMs, Do What They Can’t." Logical enough. But what will these tools be able to do, and when will they be able to do it? That is the $25 million question. Goedecke has suggestions for the medium term, and the long term.
Right now, he advises, engineers should do three things: First, use the tools. They can help you gain an advantage in the field. And also, know-thine-enemy, perhaps? Next, learn how LLMs work so you can transition to the growing field of AI work. If you can’t beat them, join them, we suppose. Finally, climb the ranks posthaste, for those in junior roles will be the first to go. Ah yes, the weak get eaten. It is a multipronged approach.
For the medium term, Goedecke predicts which skills LLMs are likely to master first. Get good at the opposite of that. For example, ill-defined or poorly-scoped problems, solutions that are hard to verify, and projects with huge volumes of code are all very difficult for algorithms. For now.
In the long term, work yourself into a position of responsibility. There are few of those to go around. So, as noted above, start vigorously climbing over your colleagues now. Why? Because executives will always need at least one good human engineer they can trust. The post observes:
"A LLM strong enough to take responsibility – that is, to make commitments and be trusted by management – would have to be much, much more powerful than a strong engineer. Why? Because a LLM has no skin in the game, which means the normal mechanisms of trust can’t apply. Executives trust engineers because they know those engineers will experience unpleasant consequences if they get it wrong. Because the engineer is putting something on the line (e.g. their next bonus, or promotion, or in the extreme case being fired), the executive can believe in the strength of their commitment. A LLM has nothing to put on the line, so trust has to be built purely on their track record, which is harder and takes more time. In the long run, when almost every engineer has been replaced by LLMs, all companies will still have at least one engineer around to babysit the LLMs and to launder their promises and plans into human-legible commitments. Perhaps that engineer will eventually be replaced, if the LLMs are good enough. But they’ll be the last to go."
If you are lucky, it will be time to retire by then. For those young enough that this is unlikely, or for those who do not excel at the rat race, perhaps a career change is in order. What jobs are safe? Sadly, this dino-baby writer does not have the answer to that question.
Cynthia Murrell, February 24, 2025