Microsoft Explains Who Is at Fault If Copilot Smart Software Does Dumb Things
September 23, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Those Windows Central experts have delivered a doozy of a write up. “Microsoft Says OpenAI’s ChatGPT Isn’t Better than Copilot; You Just Aren’t Using It Right, But Copilot Academy Is Here to Help” explains:
Avid AI users often boast about ChatGPT’s advanced user experience and capabilities compared to Microsoft’s Copilot AI offering, although both chatbots are based on OpenAI’s technology. Earlier this year, a report disclosed that the top complaint about Copilot AI at Microsoft is that “it doesn’t seem to work as well as ChatGPT.”
I think I understand. Microsoft uses OpenAI, other smart software, and home brew code to deliver Copilot in apps, the browser, and Azure services. However, users have reported that Copilot doesn’t work as well as ChatGPT. That’s interesting. Hallucination-capable software, reworked by the Microsoft engineering legions, is allegedly inferior to the original ChatGPT.
Enthusiastic young car owners replace individual parts. But the old car remains an old, rusty vehicle. Thanks, MSFT Copilot. Good enough. No, I don’t want to attend a class to learn how to use you.
Who is responsible? The answer certainly surprised me. Here’s what the Windows Central wizards offer:
A Microsoft employee indicated that the quality of Copilot’s response depends on how you present your prompt or query. At the time, the tech giant leveraged curated videos to help users improve their prompt engineering skills. And now, Microsoft is scaling things a notch higher with Copilot Academy. As you might have guessed, Copilot Academy is a program designed to help businesses learn the best practices when interacting and leveraging the tool’s capabilities.
I think this means that the user is at fault, not Microsoft’s refactored version of OpenAI’s smart software. The fix is for the user to learn how to write prompts. Microsoft is not responsible. But OpenAI’s implementation of ChatGPT is perceived as better. Furthermore, training to use ChatGPT is left to third parties. I hope I am close to the pin on this summary. OpenAI just puts Strawberries in front of hungry users and lets them gobble up ChatGPT output. Microsoft fixes up ChatGPT, and users are allegedly not happy. Therefore, Microsoft puts the burden on the user to learn how to interact with the Microsoft version of ChatGPT.
I thought smart software was intended to make work easier and more efficient. Why do I have to go to school to learn Copilot when I can just pound text or a chunk of data into ChatGPT, click a button, and get an output? Not even a Palantir boot camp will lure me to the service. Sorry, pal.
My hypothesis is that Microsoft is a couple of steps away from creating something designed for regular users. In its effort to “improve” ChatGPT, Microsoft makes the experience of using Copilot more miserable for the user. I think Microsoft’s own engineering practices act like a stuck brake on an old Lada. The vehicle has problems, so installing a new master cylinder does not improve the automobile.
Crazy thinking: That’s what the write up suggests to me.
Stephen E Arnold, September 23, 2024
DAIS: A New Attempt to Make AI Play Nicely with Humans
September 20, 2024
This essay is the work of a dumb dinobaby. No smart software required.
How about a decentralized artificial intelligence “association”? One has been set up by Michael Casey, the former chief content officer at Coindesk. (Coindesk reports about the bright, sunny world of cryptocurrency and related topics.) I learned about this society in — you guessed it — Coindesk’s online information service. The article “Decentralized AI Society Launched to Fight Tech Giants Who ‘Own the Regulators’” is interesting. I like the idea that “tech giants” own the regulators. This is an observation with which Apple and Google might not agree. Both “tech giants” have been facing some unfavorable regulatory decisions. If these regulators are “owned,” I think the “tech giants” need to exercise their leadership skills to make the annoying regulators go away. One resigned in the EU this week, but as Shakespeare’s rebel said of lawyers, let’s kill them all. So far the “tech giants” have been bumbling along, growing bigger as a result of feasting on data and amplifying allegedly monopolistic behaviors which just seem to pop up, rules or no rules.
Two experts look at what emerged from a Petri dish of technological goodies. Quite a surprise I assume. Thanks, MSFT Copilot. Good enough.
The write up reports:
Industry leaders have launched a non-profit organization called the Decentralized AI Society (DAIS), dedicated to tackling the probability of the monopolization of the artificial intelligence (AI) industry.
What is the DAIS outfit setting out to do? Here’s what Coindesk reports and this is a quote of the bullets from the write up:
Bringing capital to the decentralized AI world in what has already become an arms race for resources like graphical processing units (GPUs) and the data centers that compute together.
Shaping policy to craft AI regulations.
Education and promotion of decentralized AI.
Engineering to create new algorithms for learning models in a distributed way.
These are interesting targets. I want to point out that “decentralization” is the opposite of what the “tech giants” have already put in place; that is, concentration of money, talent, and infrastructure. Even old dogs like Oracle are now hopping on the centralized bandwagon. Newcomers, too, want to get as many cattle into the killing chute before the glamor of AI begins to lose some of its sparkle.
Several observations:
- DAIS has some crypto roots. These may become positive or negative. Right now regulators are interested in crypto, as are other enforcement entities.
- One of the Arnold Laws of Online is that centralization, consolidation, and concentration are emergent behaviors for online products and services. Countering this “law” and its “emergent” functionality is going to take more than conferences, a Web site, and some “logical” ideas which any “rational” person would heartily endorse. But emergent is tough to stop based on my experience.
- Singapore has become a hot spot for certain financial and technical activities. The problem is that nation-states may not want to be inhibited in their AI ambitions. Some may find the notion of “education” a problem as well because curricula must conform to pre-defined frameworks. Distributed is not a pre-defined anything; it is the opposite of controlled and, therefore, likely to be a bit of a problem.
Net net: Interesting idea. But Amazon, Google, Facebook, Microsoft, and some other outfits may want to talk about “distributed” while really meaning, “The technological notion is okay, but we want as much of the money as we can get.”
Stephen E Arnold, September 20, 2024
YouTube Is Bringing More AI To Its Platform
September 20, 2024
AI-generated videos have already swarmed YouTube. These videos range from fake Disney movie trailers to inappropriate content that missed being flagged. YouTube creators are already upset that their videos are being overlooked by the algorithm, but some are being hired for an AI project. Digital Trends explains more: “More AI May Be Coming To YouTube In A Big Way.”
Gemini AI is currently in beta testing across YouTube. It is described as a tool for YouTubers to brainstorm video ideas, including titles, topics, and thumbnails. Only a select few YouTubers are testing Gemini AI, and they will share their feedback. The AI tool will eventually be located underneath the platform’s analytics menu, under the research tab. The tool could actually be helpful:
“This marks Google’s second foray into including AI assistance in YouTube users’ creative processes. In May, the company launched a content inspiration tool on YouTube Studio that provides tips and suggestions for future clip topics based on viewer trends. For most any given topic, the AI will highlight related videos you’ve already published, provide tips on themes to use, and generate a script outline for you to follow.”
The YouTubers are experimenting with both Gemini AI and the content inspiration tool. They’re doing A/B testing, and their experiences will shape how AI is used on the video platform. YouTube does acknowledge that AI is a transformative creative tool, but viewers want to know if what they’re watching is real or fake. Is anyone imagining an AI warning or rating system?
Whitney Grace, September 20, 2024
Happy AI News: Job Losses? Nope, Not a Thing
September 19, 2024
This essay is the work of a dumb humanoid. No smart software required.
I read “AI May Not Steal Many Jobs after All. It May Just Make Workers More Efficient.” Immediately two points jumped out at me: the AP (the publisher of the “real” news story) is hedging with the weasel word “may,” and it hedges again with the phrase “after all.” Why is this important? The “real” news industry is interested in smart software to reduce costs and generate more “real” news more quickly. The days of “real” reporters disappearing for hours to confirm a point with a source are often associated with fiddling around. The costs of doing anything without a gusher of money pumping 24×7 are daunting. The word “efficient” sits in the headline like a digital harridan stakeholder. Who wants that?
The manager of a global news operation reports that under his watch, he has achieved peak efficiency. Thanks, MSFT Copilot. Will this work for production software development? Good enough is the new benchmark, right?
The story itself strikes me as a bit of content marketing which says, “Hey, everyone can use AI to become more efficient.” The subtext is, “Hey, don’t worry. No software robot or agentic thingy will reduce staff. Probably.”
The AP is a litigious outfit, so I will quote sparingly even though I worked at a newspaper which “participated” in the business process of the entity. Here’s one sentence from the “real” news write up:
Instead, the technology might turn out to be more like breakthroughs of the past — the steam engine, electricity, the internet: That is, eliminate some jobs while creating others. And probably making workers more productive in general, to the eventual benefit of themselves, their employers and the economy.
Yep, just like the steam engine and the Internet.
When technologies emerge, most go away or become componentized or dematerialized. When one of those hot technologies fails to produce revenues, quite predictable outcomes result. Executives get fired. VC firms do fancy dancing. IRS professionals squint at tax returns.
So far AI has been a “big guys win because they have bundles of cash” and “little outfits lose control of their costs” story. Here’s my take:
- Human-generated news is expensive and if smart software can do a good enough job, that software will be deployed. The test will be real time. If the software fails, the company may sell itself, pivot, or run a garage sale.
- When “good enough” is the benchmark, staff will be replaced with smart software. Some of the whiz kids in AI like the buzzword “agentic.” Okay, agentic systems will replace humans with good enough smart software. That will happen. Excellence is not the goal. Money saving is.
- Over time, the ideas of the current transformer-based AI systems will be enriched by other numerical procedures, and maybe — just maybe — some novel methods will provide “smart software” with more capabilities. Right now, most smart software just finds a path through already-known information. No output is new, just close to what the system’s math concludes is on point. The next generation of smart software lies somewhere in the future. How far? It’s anyone’s guess.
My hunch is that Amazon Audible will suggest that humans will not lose their jobs. However, the company is allegedly going to replace human voices with “audibles” generated by smart software. (For more about this displacement of humans, check out the Bloomberg story.)
Net net: The “real” news story prepares the field for planting writing software in an organization. It says, “Customers will benefit, and more jobs will be produced.” Great assertions. I think AI will be disruptive, and in unpredictable ways. Why not come out and say, “If the agentic software is good enough, we will fire people”? Answer: Being upfront is not something those who are not dinobabies do.
Stephen E Arnold, September 19, 2024
Smart Software: More Novel and Exciting Than a Mere Human
September 17, 2024
This essay is the work of a dumb humanoid. No smart software required.
Idea people: What a quaint notion. Why pay for expensive blue-chip consultants or wonder youth from fancy universities? Just use smart software to generate new, novel, unique ideas. Does that sound over the top? Not according to “AIs Generate More Novel and Exciting Research Ideas Than Human Experts.” Wow, I forgot exciting. AI outputs can be exciting to the few humans left to examine the outputs.
The write up says:
Recent breakthroughs in large language models (LLMs) have excited researchers about the potential to revolutionize scientific discovery, with models like ChatGPT and Anthropic’s Claude showing an ability to autonomously generate and validate new research ideas. This, of course, was one of the many things most people assumed AIs could never take over from humans; the ability to generate new knowledge and make new scientific discoveries, as opposed to stitching together existing knowledge from their training data.
Aside from having no job and embracing couch surfing or returning to one’s parental domicile, what are the implications of this bold statement? It means that smart software is better, faster, and cheaper at producing novel and “exciting” research ideas. There is even a chart to prove that the study’s findings are allegedly reproducible. The graph has whisker lines too. I am a believer… sort of.
Then there is the magic of a Bonferroni correction, which allegedly copes with the problem of performing multiple dependent or independent statistical tests in one meta-calculation. Does it work? Sure, a fancy average is usually close enough for horseshoes, I have heard.
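For readers who want to see the mechanics, here is a minimal sketch of a Bonferroni correction. The p-values are hypothetical, not taken from the study; the point is only that the significance threshold shrinks as the number of tests grows.

```python
# Minimal sketch of a Bonferroni correction (illustrative values only).
# With m tests, each p-value is compared to alpha / m instead of alpha,
# which controls the chance of even one false positive across the family.

alpha = 0.05
p_values = [0.002, 0.011, 0.030, 0.200]  # hypothetical results from m = 4 tests

m = len(p_values)
threshold = alpha / m  # 0.0125 for four tests

for i, p in enumerate(p_values, start=1):
    verdict = "significant" if p <= threshold else "not significant"
    print(f"test {i}: p = {p:.3f} -> {verdict} (corrected threshold {threshold:.4f})")
```

Note the conservatism: a p-value of 0.030 passes at the usual 0.05 level but fails here. That is the trade-off buried in the “fancy average.”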
Just keep in mind that human judgments are tossed into the results. That adds some of that delightful subjective spice. The proof of the “novelty” creation process, according to the write up, comes from Google. The article says:
…we can’t understate AI’s potential to radically accelerate progress in certain areas – as evidenced by Deepmind’s GNoME system, which knocked off about 800 years’ worth of materials discovery in a matter of months, and spat out recipes for about 380,000 new inorganic crystals that could have revolutionary potential in all sorts of areas. This is the fastest-developing technology humanity has ever seen; it’s reasonable to expect that many of its flaws will be patched up and painted over within the next few years. Many AI researchers believe we’re approaching general superintelligence – the point at which generalist AIs will overtake expert knowledge in more or less all fields.
Flaws? Hallucinations? Hey, not to worry. These will be resolved as the AI sector moves with the arrow of technology. Too bad if some humanoids are pierced by the arrow and die on the shoulder of the uncaring Information Superhighway. What about those who say AI will not take jobs? Have those people talked with the accountants responsible for cost control?
Stephen E Arnold, September 17, 2024
Trust AI? Obvious to Those Who Do Not Want to Think Too Much
September 16, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Who wants to evaluate information? The answer: Not too many people. In my lectures, I show a diagram of the six processes an analyst or investigator should execute. The reality is that several of the processes are difficult, which means time and money are required to complete them in a thorough manner. Who has time? The answer: Not too many people or organizations.
What’s the solution? The Engineer’s article “Study Shows Alarming Level of Trust in AI for Life and Death Decisions” reports:
A US study that simulated life and death decisions has shown that humans place excessive trust in artificial intelligence when guiding their choices.
Interesting. Perhaps China is the poster child for putting “trust” in smart software hooked up to nuclear weapons? Fortune reported on September 10, 2024, that China has refused to sign an agreement to ban smart software from controlling nuclear weapons.
Yep, I trust AI, don’t you? Thanks, MSFT Copilot. I trusted you to do a great job. What did you deliver? A good enough cartoon.
The study reported in The Engineer might be of interest to some in China. Specifically, the write up stated:
Despite being informed of the fallibility of the AI systems in the study, two-thirds of subjects allowed their decisions to be influenced by the AI. The work was conducted by scientists at the University of California, Merced.
Are these results on point? My experience suggests that people do more than accept the outputs of a computer as “correct.” Many, when shown facts that contradict the computer output, defend the machine as more reliable and accurate.
I am not quite such a goose. Machines and software generate errors. The systems have for decades. But I think the reason is that the humans with whom I have interacted pursue convenience. Verifying, analyzing, and thinking are hot processes. Humans want to kick back in cool, low humidity environments and pursue the least effort path in many situations.
The illusion of computer accuracy allows people to skip reviewing their Visa statements and to accept without question an output displayed in a spreadsheet. The fact that the smart software hallucinates is ignored. I hear, “I know when the system needs checking.” Yeah, sure you do.
Those involved in preparing the study are quoted as saying:
“Our project was about high-risk decisions made under uncertainty when the AI is unreliable,” said Holbrook. “We should have a healthy skepticism about AI, especially in life-or-death decisions. We see AI doing extraordinary things and we think that because it’s amazing in this domain, it will be amazing in another. We can’t assume that. These are still devices with limited abilities.”
These folks are not going to be hired to advise the Chinese government, I surmise.
Stephen E Arnold, September 16, 2024
Need Help, Students? AI Is Here
September 13, 2024
Here is a resource for, well, for those who would cheat maybe? The site Pisi.ee shares information on a course called “How to Use AI to Write a Research Paper.” Hosted by Fikper.com, the course is designed for “high school, college, and university students who are eager to improve their research and writing skills through the use of artificial intelligence.” Research, right. Wink, wink. The course description specifies:
“Whether you’re a high school student tackling your first research project, a college student refining your academic skills, or a university scholar pursuing advanced studies, understanding how to leverage AI can significantly enhance your efficiency and effectiveness. This course offers a comprehensive guide to integrating AI tools into your research process, providing you with the knowledge and skills to excel. Many students struggle with the task of conducting research and writing about it. Identifying a research problem, creating clear questions, looking for other literature, and keeping your academic integrity are a challenge, especially with all the information available. This course addresses these challenges head-on, providing step-by-step guidance and practical exercises that lead you through the research process. What sets this course apart from others is its practical, hands-on approach combined with a strong emphasis on academic integrity.”
A strong emphasis on integrity, you say? Well, that is different, then. All the tools one may need to generate, er, research papers are covered:
“Tools like Zotero, Mendeley, Grammarly, Hemingway App, IBM Watson, Google Scholar, Turnitin, Copyscape, EndNote, and QuillBot can be used at different stages of the research process. Our goal is to give you a toolkit of resources that you can choose to apply, making your research and writing tasks more efficient and effective.”
Yep, just what aspiring students need to gain that “competitive edge,” as the description puts it. With integrity, of course.
Cynthia Murrell, September 13, 2024
US Government Procurement: Long Live Silos
September 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “Defense AI Models A Risk to Life Alleges Spurned Tech Firm.” Frankly, the headline made little sense to me, so I worked through what is a story about a contractor who believes it was shafted by a large consulting firm. In my experience, the situation is neither unusual nor particularly newsworthy. The write up does a reasonable job of presenting a story which could have been titled “Naive Start Up Smoked by Big Consulting Firm.” A small high technology contractor with smart software hooks up with a project in the Department of Defense. The high tech outfit is not able to meet the requirements to get the job. The little AI high tech outfit scouts around and brings in a big consulting firm to get the deal done. After some bureaucratic cycles, the small high tech outfit is benched. If you are not familiar with how US government contracting works, the write up provides some insight.
The work product of AI projects will be digital silos. That is the key message of this procurement story. I don’t feel sorry for the smaller company. It did not prepare itself to deal with the big time government contractor. Outfits are big for a reason. They exploit opportunities and rarely emulate Mother Teresa-type behavior. Thanks, MSFT Copilot. Good enough illustration although the robots look stupid.
For me, the article is a stellar example of how information (or AI) silos are created within the US government. Smart software is hot right now. Each agency, each department, and each unit wants to deploy an AI-enabled service. Then that AI-infused service becomes (one hopes) an afterburner for more money with which one can add headcount and more AI technology. AI is a rare opportunity to become recognized as a high-performance operator.
As a result, each AI service is constructed within a silo. Think about a structure designed to hold that specific service. The design is purpose built to keep rats and other vermin from benefiting from the goodies within the AI silo. Despite the talk about breaking down information silos, silos in a high-profile, high-potential technical area like artificial intelligence are the principal product of each agency, each department, and each unit. The payoff could be a promotion which might result in a cushy job in the commercial AI sector or a golden ring; that is, the senior executive service.
I understand the frustration of the small, high tech AI outfit. It knows it has been played by the big consulting firm and the procurement process. But, hey, there is a reason the big consulting firm generates billions of dollars in government contracts. The smaller outfit failed to lock down its role, retain the key to the know how it developed, and allowed its “must have” cachet to slip away.
Welcome, AI company, to the world of the big time Beltway Bandit. Were you expecting the big time consulting firm to do what you wanted? Did you enter the deal with a lack of knowledge and management sophistication and a couple of false assumptions? And what about the notion of “algorithmic warfare”? Yeah, autonomous weapons systems are the future. Furthermore, when autonomous systems are deployed, the only way they can be neutralized is to use more capable autonomous weapons. Does this sound like a replay of the logic of Cold War thinking and everyone’s favorite bedtime read On Thermonuclear War, still available on Amazon and, as of September 6, 2024, on the Internet Archive at this link?
Several observations are warranted:
- Small outfits need to be informed about how big consulting companies with billions in government contracts work the system before exchanging substantive information.
- The US government procurement processes are slow to change, and the Federal Acquisition Regulations and related government documents provide the rules of the road. Learn them before getting too excited about a request for proposal or Federal Register announcement.
- In a fight with a big time government contractor make sure you bring money, not a chip on your shoulder, to the meeting with attorneys. The entity with the most money typically wins because legal fees are more likely to kill a smaller firm than any judicial or tribunal ruling.
Net net: Silos are inherent in the work process of any government, even those run by different rules. But what about the small AI firm’s loss of the contract? It happens so often that I view it as a normal part of the success workflow. Winners and losers are inevitable. Be smarter to avoid losing.
Stephen E Arnold, September 12, 2024
How Will Smart Cars Navigate Crowded Cityscapes When People Do Humanoid Things?
September 11, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Who collided in San Francisco on July 6, 2024? (No, not the February 2024 incident. Yes, I know it is easy to forget such trivial incidents.) Did the Googley Waymo vehicle (self driving and smart, of course) bump into the cyclist? Did the cyclist decide to pull a European Union-type stunt and run into the self driving car?
If the legal outcome of this San Francisco autonomous car – bicycle incident goes in favor of the bicyclist, autonomous vehicles will have to be smart enough to avoid situations like the one shown in the ChatGPT cartoon. Microsoft Copilot would not render the image. When I responded, “What?” the Copilot hung. Great stuff.
The question is important for insurance, publicity, and other monetary reasons. A good offense is the best defense, someone said. “Waymo Cites Possible Intentional Contact by a Bicyclist to Robotaxi in S.F.” reports:
While the robotaxi was stopped, the cyclist passed in front of it and appeared to dismount, according to the documents. “The cyclist then reached out a hand and made contact with the front passenger side of the stationary Waymo AV (autonomous vehicle), backed the bicycle up slightly, dropped the bicycle, then fell to the ground,” the documents said. The cyclist received medical treatment at the scene and was transported to the hospital, according to the documents. The Waymo vehicle was not damaged during the incident.
In my view, this is the key phrase in the news report:
In the documents, Waymo said it was submitting the report because of the alleged crash and because the cyclist influenced the driving task of the AV and was transported to the hospital, even though the incident “may involve intentional contact by the bicyclist with the Waymo AV and the occurrence of actual impact between the Waymo AV and cycle is not clear.”
We have doubt, reasonable doubt obviously. Googley Waymo is definitely into reasoning. And we have the word pair “intentional contact.” Okay, to me this means the smart Waymo vehicle did nothing wrong. A human — chock full of possibly malicious if not criminal intent — created a TikTok moment. It is too bad there is no video of the incident. (Even my low-ball Hyundai records what’s in front of it. Doesn’t the Googley Waymo do that with its array of Star Wars adornments, sensors, probes, and other accoutrements of Googley Waymo vehicles? Guess not.) But the autonomous vehicle had something that could act in an intelligent manner: a human test driver.
What was that person’s recollection of the incident? The news story reports that the Googley Waymo outfit “did not immediately respond to a request for further comment on the incident.”
Several observations:
- The bike-riding human created the accident with a parked Waymo super intelligent vehicle and test driver in command.
- The Waymo outfit did not want to talk to the San Francisco Chronicle reporter or editor. (I used to work at a newspaper, and I did not like to talk to the editors and news professionals either.)
- Autonomous cars are going to have to be equipped with sufficiently expert AI systems to avoid humans who are acting in a way to convert Googley Waymo services into a source of revenue. Failing that, I anticipate more kinetic interactions between Googley smart cars and humanoids not getting paid to ride shotgun on smart software.
Net net: How long have big time technology companies been trying to get autonomous vehicles to produce cash, not liabilities?
Stephen E Arnold, September 11, 2024
Too Bad Google and OpenAI. Perplexity Is a Game Changer, Says Web Pro News!
September 10, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I have tested a number of smart software systems. I can say, based on my personal experience, none is particularly suited to my information needs. Keep in mind that I am a dinobaby, more at home in a research library or the now-forgotten Dialog command line. ss cc=7900, thank you very much.
I worked through the write up “Why Perplexity AI Is (Way) Better Than Google: A Deep Dive into the Future of Search.” The phrase “Deep Dive” reminded me of a less-than-overwhelming search service called Deepdyve. (I just checked and, much to my surprise, the for-fee service is online at https://www.deepdyve.com/. Kudos, Deepdyve, which someone told me was a tire kicker, or maybe more, with the Snorkle system. I could look it up using a smart software system, but performance is crappy today, and I don’t want to get distracted from the Web Pro News pronouncement. Besides, that smart software output requires a lot of friction; that is, verifying that the outputs are accurate.)
A dinobaby (the author of this blog post) works in a library. Thanks, MSFT Copilot, good enough.
Here’s the subtitle to the article. Its verbosity smacks of that good old and mostly useless search engine optimization tinkering:
Perplexity AI is not just a new contender; it’s a game-changer that could very well dethrone Google in the years to come. But what exactly makes Perplexity AI better than Google? Let’s explore the…
No, I didn’t truncate the subtitle. That’s it.
The write up explains what differentiates Perplexity from the other smart software, question-answering marvels. Here’s a list:
- Speed and Precision at Its Core
- Specialized Search Experience for Enterprise Needs
- Tailored Results and User Interaction
- Innovations in Data Privacy
- Ad-Free Experience: A Breath of Fresh Air
- Standardized Interface and High Accuracy
- The Potential to Revolutionize Search
In my experience, I am not sure about the speed of Perplexity or any smart search and retrieval system. Speed must be compared to something. I can obtain results from my installation of Everything search pretty darned quick. None of the cloud search solutions comes close. My Mistral installation grunts and sweats on a corpus of 550 patent documents. How about some benchmarks, WebProNews?
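To make the point concrete, here is a minimal sketch of the kind of benchmark I have in mind. The system names, corpus, and queries are stand-ins; a fair test would run the same query set against each real system many times and report the distribution, not just a mean.

```python
import time

# Hypothetical stand-in for issuing a query to some search system.
# A real benchmark would call each vendor's actual API with identical queries.
def run_query(system_name: str, query: str) -> list[str]:
    time.sleep(0.01)  # placeholder for real retrieval work
    return [f"{system_name}-hit-{i}" for i in range(10)]

queries = ["lithium ion cathode", "patent claim scope", "vector search pruning"]

for system in ("system_a", "system_b"):
    start = time.perf_counter()
    for q in queries:
        run_query(system, q)
    elapsed = time.perf_counter() - start
    print(f"{system}: {elapsed / len(queries) * 1000:.1f} ms per query (mean)")
```

Numbers like these, gathered on identical corpora, would back up a speed claim. Adjectives do not.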
Precision means that the query returns documents matching a query. There is a formula (which is okay as formulae go) which is, as I recall, relevant retrieved instances divided by all retrieved instances. To calculate this, one must take a bounded corpus, run queries, and develop an understanding of what is in the corpus by reading documents and comparing outputs from test queries. Then one uses another system and repeats the queries, comparing the results. The process can be embellished, particularly by graduate students working on an advanced degree. But something more than generalizations is needed to convince me of anything related to “precision.” Determining precision is impossible when vendors do not disclose sources and make the data sets available. Subjective impressions are okay for messy water lilies, but in the dinobaby world of precision and its sidekick recall, a bit of work is necessary.
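For the record, the arithmetic itself is simple; the hard, expensive part is the human relevance judgment. Here is a toy illustration with hypothetical document identifiers, not a real bounded corpus.

```python
# Toy precision and recall calculation for a single query.
# The document IDs and relevance judgments are hypothetical.

relevant = {"d1", "d3", "d7", "d9"}         # judged relevant within the corpus
retrieved = {"d1", "d2", "d3", "d4", "d9"}  # what the system returned

true_positives = relevant & retrieved       # relevant documents actually retrieved

precision = len(true_positives) / len(retrieved)  # 3 / 5 = 0.60
recall = len(true_positives) / len(relevant)      # 3 / 4 = 0.75

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```

Repeat that over dozens of queries on each system and compare the averages; that is the work the Web Pro News write up skips.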
The “specialized search experience” means what? To me, I like to think about computational chemists. The interface has to support chemical structures, weird CAS registry numbers, words (mostly ones unknown to a normal human), and other assorted identifiers. As far as I know, none of the smart software I have examined does this for computational chemists or most of the other “specialized” experiences engineers, mathematicians, or physicists, among others, use in their routine work processes. I simply don’t know what Web Pro News wants me to understand. I am baffled, a normal condition for dinobabies.
I like the idea of tailored results. That’s what Instagram, TikTok, and YouTube try to deliver in order to increase stickiness. I think in terms of citations to relevant documents relevant to my query. I don’t like smart software which tries to predict what I want or need. I determine that based on the information I obtain, read, and write down in a notebook. Web Pro News and I are not on the same page in my paper notebook. Dinobabies are a pain, aren’t they?
I like the idea of “data privacy.” However, I need evidence that Perplexity’s innovations actually work. No data, no trust: Is that difficult for a younger person to understand?
The standardized interface makes life easy for the vendor. Think about the computational chemist: the interface must match her specific work processes. A standard interface is likely to be wide of the mark for some enterprise professionals. The phrase “high accuracy” means nothing without one’s knowing the corpus from which the index is constructed. Furthermore, the notion of probability means “close enough for horseshoes.” Hallucination refers to outputs from smart software which are wide of the mark. More insidious are errors which cannot be easily identified. A standard interface and accuracy don’t go together like peanut butter and jelly or bread and butter. The interface is separate from the underlying system. The interface might be “accurate” if the term were defined in the write up, but it is not. Therefore, accuracy is like “love,” “mom,” and “ethics.” Anything goes, just not for me.
The “potential to revolutionize search” is marketing baloney. Search today is more problematic than at any time in my more than half century of work in information retrieval. The only “revolutionary” things are the ways to monetize users’ belief that the outputs are better, faster, and cheaper than other available options. When one thinks about better, faster, and cheaper, I must add the caveat: pick two.
What’s the conclusion to this content marketing essay? Here it is:
As we move further into the digital age, the way we search for information is changing. Perplexity AI represents a significant step forward, offering a faster, more accurate, and more user-centric alternative to traditional search engines like Google. With its advanced AI technologies, ad-free experience, and commitment to data privacy, Perplexity AI is well-positioned to lead the next wave of innovation in search. For enterprise users, in particular, the benefits of Perplexity AI are clear. The platform’s ability to deliver precise, context-aware insights makes it an invaluable tool for research-intensive tasks, while its user-friendly interface and robust privacy measures ensure a seamless and secure search experience. As more organizations recognize the potential of Perplexity AI, we may well see a shift away from Google and towards a new era of search, one that prioritizes speed, precision, and user satisfaction above all else.
I know one thing: the stakeholders and backers of the smart software hope that one of the AI players generates tons of cash and dump trucks of profit sharing checks. That day, I think, lies in the future. Perplexity hopes it will be the winner; hence, content marketing is money well spent. If I were not a dinobaby, I might be excited. So far I am just perplexed.
Stephen E Arnold, September 10, 2024