Teenie Boppers and Smart Software: Yep, Just Have Money

January 23, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I scanned the research summary “About a Quarter of U.S. Teens Have Used ChatGPT for Schoolwork – Double the Share in 2023.” Like other Pew data, the summary contained numerous numbers. I was not sufficiently motivated to dig into the methodology to find out how the sample was assembled nor how Pew prompted the mobile-addicted youth to provide presumably truthful answers to direct questions. But why nitpick? We are at the onset of an interesting year which will include forthcoming announcements about how algorithms are agentic and able to fuel massive revenue streams for those in the know.


Students doing their homework while their parents play polo. Thanks, MSFT Copilot. Good enough. I do like the croquet mallets and volleyball too. But children from well-to-do families have such items in abundance.

Let’s go to the video tape, as the late and colorful Warner Wolf once said to his legion of Washington, DC, fans.

One of the highlights of the summary was this finding:

Teens who are most familiar with ChatGPT are more likely to use it for their schoolwork. Some 56% of teens who say they’ve heard a lot about it report using it for schoolwork. This share drops to 18% among those who’ve only heard a little about it.

Not surprisingly, the future leaders of America embrace shortcuts. The question is, “How quickly will awareness reach 99 percent and usage nose above 75 percent?” My guesstimate is pretty quickly. Convenience and more time to play with mobile phones will drive the adoption. Who in America does not like convenience?

Another finding catching my eye was:

Teens from households with higher annual incomes are most likely to say they’ve heard about ChatGPT. The shares who say this include 84% of teens in households with incomes of $75,000 or more say they’ve heard at least a little about ChatGPT.

I found this interesting because it appears to suggest that if a student comes from a home where money does not seem to be a huge problem, the industrious teens are definitely aware of smart software. And when it comes to using the digital handmaiden, Pew apparently finds nothing. There is no data point relating richer progeny to greater use. Instead we learned:

Teens who are most familiar with the chatbot are also more likely to say using it for schoolwork is OK. For instance, 79% of those who have heard a lot about ChatGPT say it’s acceptable to use for researching new topics. This compares with 61% of those who have heard only a little about it.

My thought is that more wealthy families are more likely to have teens who know about smart software. I would hypothesize that wealthy parents will pay for the more sophisticated smart software and smile benignly as the future intelligentsia stride confidently to ever brighter futures. Those without the money will get the opportunity to watch their classmates have more time for mobile phone scrolling, unboxing Amazon deliveries, and grabbing burgers at Five Guys.

I am not sure that the link between wealth and access to learning experiences is a random, one-off occurrence. If I am correct, the Pew data suggest that smart software is not reinforcing democracy. It seems to be making a digital Middle Ages more and more probable. But why think about what a dinobaby hypothesizes? It is tough to scroll zippy mobile phones with old paws and yellowing claws.

Stephen E Arnold, January 23, 2025

AI Doom: Really Smart Software Is Coming So Start Being Afraid, People

January 20, 2025

Prepared by a still-alive dinobaby.

The essay “Prophecies of the Flood” gathers several comments about software that thinks and decides without any humans fiddling around. The “flood” metaphor evokes the streams of money about which money people fantasize. The word “flood” also evokes the Hebrew Bible’s presentation of a divinely initiated cataclysm intended to cleanse the Earth of widespread wickedness. Plus, one cannot overlook the image of small towns in North Carolina inundated in mud and debris from a very bad storm.


When the AI flood strikes as a form of divine retribution, will the modern ark be filled with humans? Nope. The survivors will be those smart agents infused with even smarter software. Tough luck, humanoids. Thanks, OpenAI, I knew you could deliver art that is good enough.

To sum up: A flood is bad news, people.

The essay states:

the researchers and engineers inside AI labs appear genuinely convinced they’re witnessing the emergence of something unprecedented. Their certainty alone wouldn’t matter – except that increasingly public benchmarks and demonstrations are beginning to hint at why they might believe we’re approaching a fundamental shift in AI capabilities. The water, as it were, seems to be rising faster than expected.

The signs of darkness, according to the essay, include:

  • Rising water in the generally predictable technology stream in the park populated with ducks
  • Agents that “do” something for the human user or another smart software system. To humans with MBAs, art history degrees, and programming skills honed at a boot camp, the smart software is magical. Merlin wears a gray T-shirt, sneakers, and faded denims
  • Nifty art output in the form of images and — gasp! — videos.

The essay concludes:

The flood of intelligence that may be coming isn’t inherently good or bad – but how we prepare for it, how we adapt to it, and most importantly, how we choose to use it, will determine whether it becomes a force for progress or disruption. The time to start having these conversations isn’t after the water starts rising – it’s now.

Let’s assume that I buy this analysis and agree with the notion “prepare now.” How realistic is it that the United Nations, a couple of superpowers, or a motivated individual can have an impact? Gentle reader, doom sells. Examples include The Big Short: Inside the Doomsday Machine, The Shifts and Shocks: What We’ve Learned – and Have Still to Learn – from the Financial Crisis, and Too Big to Fail: How Wall Street and Washington Fought to Save the Financial System from Crisis – and Themselves, and others, many others.

Have these dissections of problems had a material effect on regulators, elected officials, or the people in the bank down the street from your residence? Answer: Nope.

Several observations:

  1. Technology doom works because innovations have positive and negative impacts. What makes technology exciting is that no one is exactly sure what the knock-on effects will be. Therefore, doom is coming along with the good parts
  2. Taking a contrary point of view creates opportunities to engage with those who want to hear something different. Insecurity is a powerful sales tool.
  3. Sending messages about future impacts pulls clicks. Clicks are important.

Net net: The AI revolution is a trope. Never mind that after decades of researchers’ work, a revolution has arrived. Lionel Messi allegedly said, “It took me 17 years to become an overnight success.” (Mr. Messi is a highly regarded professional soccer player.)

Will the ill-defined technology kill humans? Answer: Who knows. Will humans using ill-defined technology like smart software kill humans? Answer: Absolutely. Can “anyone” or “anything” take an action to prevent AI technology from rippling through society? Answer: Nope.

Stephen E Arnold, January 20, 2025

Microsoft: Marketing and Semi Functional Technology Amuse Some People

January 17, 2025

Prepared by a still-alive dinobaby.

With support for Windows 10 ending, Microsoft is understandably eager for users to upgrade before then. Its latest nudge, however, may not have the desired effect. TechRadar reports, "Microsoft Embarrasses Itself with Windows 10 Pop-Up that Hogs the Desktop Urging an Upgrade to Windows 11—then Promptly Crashes." Oh dear. Writer Darren Allen elaborates:

"In a rather embarrassing turn of events for Microsoft, its latest bid to urge Windows 10 users to upgrade to Windows 11 has gone awry because the pop-up in question crashes. This is a full-screen pop-up, of which we’ve seen very similar (if not identical) variants before, which urges you to ‘Start planning for Windows 10 end of support’ ahead of the End of Life date for the OS which is October 2025. … The fresh wrinkle here, though, as Windows Latest reports, is that this pop-up is crashing for some Windows 10 users, as flagged up by a denizen of X (@ems_konto). So, what we have here is a message urging you to start planning what you’ll do when Windows 10 support runs out, telling you to prepare now, and that the Windows Backup app can help you transfer all your files to a new Windows 11 PC. And then the pop-up window immediately freezes, becomes unresponsive, and informs the user that ‘Reusable UX Interaction Manager is not working’ – whoops."

Not great for a company trying to usher users into the next iteration of its OS. Will users lose confidence and look elsewhere? A few Windows alternatives do exist, though none that are both cheap and easy to migrate to. Will more folks consider the adventure of using Linux? Perhaps a few will ditch the PC altogether for a Mac. We suspect most will just stick with the company they know, embarrassing glitches and all. Inertia, marketing, and good enough engineering are hallmarks of the company for absolutely everyone eager to have a digital mom reminding them to do what is right and proper. Does mom follow the rules? Nah, my mother was a PTA cookie hustler. Who knows what one’s mom really did?

Stephen E Arnold, January 17, 2025

Ground Hog Day: Smart Enterprise Search

January 7, 2025

I am a dinobaby. I also wrote the Enterprise Search Report, 1st, 2nd, and 3rd editions. I wrote The New Landscape of Search. I wrote some other books. The publishers are long gone, and I am mostly forgotten in the world of information retrieval. Read this post, and you will learn why. Oh, and no AI helped me out, unless you count the art idea. I used Stable Diffusion for the rat, er, sorry, ground hog day creature.

I think it was 2002 when the owner of a publishing company asked me if I thought there was an interest in profiles of companies offering “enterprise search solutions.” I vaguely remember the person, and I will leave it up to you to locate a copy of the 400-page books I wrote about enterprise search.

The setup for the book was simple. I identified the companies which seemed to bid on government contracts for search, companies providing search and retrieval to organizations, and outfits which had contacted me to pitch their enterprise search systems before exiting stealth mode. By the time the first edition appeared in 2004, the companies in the ESR were flogging their products.


The ground hog effect is a version of the Yogi Berra “Déjà vu all over again” thing. Enterprise search is just out of reach now and maybe forever.

The enterprise search market imploded. It was there and then it wasn’t. Can you describe the features and functions of these enterprise search systems from the “golden age” of information retrieval:

  • Innerprise
  • InQuira
  • iPhrase
  • Lextek Onix
  • MondoSearch
  • Speed of Mind
  • Stratify (formerly Purple Yogi)

The end of enterprise search coincided with large commercial enterprises figuring out that “search” in a complex organization was not one thing. The problem remains today. Lawyers in a Fortune 1000 company want one type of search. Marketers want another “flavor” of search. The accountants want a search that retrieves structured and unstructured data plus images of invoices. Chemists want chemical structure search. Senior managers want absolutely zero search of their personal and privileged data unless it is lawyers dealing with litigation. In short, each unit wants a highly particularized search and each user wants access to his or her data. Access controls are essential, and they are a hassle at a time when the notion of an access control list was like learning to bake bread following a recipe in Egyptian hieroglyphics.
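To make the access control point concrete, here is a minimal sketch of the post-query filtering an enterprise search system has to perform. The index, group names, and permission model are hypothetical illustrations, not the design of any product mentioned in this post.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    # Groups allowed to see this document, e.g. {"legal", "accounting"}
    allowed_groups: set = field(default_factory=set)

@dataclass
class User:
    name: str
    groups: set

def search(index: list[Document], query: str, user: User) -> list[Document]:
    """Naive keyword match, then trim the hit list to what the user may see."""
    hits = [d for d in index if query.lower() in d.text.lower()]
    # The ACL check: a hit is returned only if the user shares a group
    # with the document. The senior manager's private files never surface.
    return [d for d in hits if d.allowed_groups & user.groups]

index = [
    Document("inv-001", "Scanned invoice for Q3 marketing spend", {"accounting"}),
    Document("lit-042", "Privileged memo on pending litigation", {"legal"}),
]

paralegal = User("pat", {"legal"})
print([d.doc_id for d in search(index, "litigation", paralegal)])  # ['lit-042']
print([d.doc_id for d in search(index, "invoice", paralegal)])     # []
```

Even this toy version hints at the hassle: every new hire, departure, and reorganization has to propagate into the permission data, or the system either leaks documents or hides them.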

These problems exist today and are complicated by podcasts, video, specialized file types for 3D printing, email, encrypted messaging, unencrypted messaging, and social media. No one has cracked the problem of a senior salesperson who changes a PowerPoint deck to close a deal. Where is that particular PowerPoint? Few know, and the salesperson may have deleted the file he or she changed minutes before the face-to-face pitch. This means that baloney like “all the information in an organization is searchable” is not just stupid; it is impossible.

The key events were the legal and financial hassles over Fast Search & Transfer. Microsoft bought the company in 2008, and that was the end of a reasonably capable technology platform and — believe it or not — a genuine alternative to Google Web search. A number of enterprise search companies sold out because the cost of keeping the technology current and actually running a high-grade sales and marketing program spelled financial doom. Examples include Exalead and Vivisimo, among others. Others just went out of business: Delphes (remember that one?). The kiss of death for the type of enterprise search emphasized in the ESR was the acquisition of Autonomy by Hewlett Packard. There was a roll-up play underway by OpenText, which has since redefined itself as a smart software company with Fulcrum and BRS Search under its wing.

What replaced enterprise search when the dust settled in 2011? From my point of view it was Shay Banon’s Elasticsearch. One might argue that Lucid Works (née Lucid Imagination) was a player. That’s okay. I am, however, going to go with Elastic because it offered an open source version and a commercial version with options for ongoing engineering support. For the commercial alternatives, I would say that Microsoft became the default provider. I don’t think SharePoint search “worked” very well, but it was available. Google’s Search Appliance appeared and disappeared. There was zero upside for Google with a product that was “inefficient” at making a big profit for the firm. So, Microsoft it was. For some government agencies, there was Oracle.

Oracle acquired Endeca and focused on that computationally wild system’s ability to power eCommerce sites. Oracle paid about $1 billion for a system which used to be enterprise search with consulting baked in. One could buy enterprise search from Oracle and get structured query language search, what Oracle called “secure enterprise search,” and maybe a dollop of TripleHop and some other search systems the company absorbed before the end of the enterprise search era. IBM talked about search, but the last time I drove by IBM Government Systems in Gaithersburg, Maryland, it, like IBM search, had moved on. Yo, Watson.

Why did I make this dalliance down memory lane the boring introduction to a blog post? The answer is that I read “Are LLMs At Risk Of Going The Way Of Search? Expect A Duopoly.” This is a paywalled article, so you will have to pony up cash or go to a library. Here’s an abstract of the write up:

  1. The evolution of LLMs (Large Language Models) will lead users to prefer one or two dominant models, similar to Google’s dominance in search.

  2. Companies like Google and Meta are well-positioned to dominate generative AI due to their financial resources, massive user bases, and extensive data for training.

  3. Enterprise use cases present a significant opportunity for specialized models.

Therefore, consumer search will become a monopoly or duopoly.

Let’s assume the Forbes analysis is accurate. Here’s what I think will happen:

First, the smart software train will slow and a number of repackagers will use what’s good enough; that is, cheap enough and keeps the client happy. Thus, a “golden age” of smart search will appear with outfits like Google, Meta, Microsoft, and a handful of others operating as utilities. The US government may standardize on Microsoft, but it will be partners who make the system meet the quite particular needs of a government entity.

Second, the trajectory of the “golden age” will end as it did for enterprise search. The costs and shortcomings become known. Years will pass, probably a decade, maybe less, until a “new” approach becomes feasible. The news will diffuse and then a seismic event will occur. For AI, it was the 2023 announcement that Microsoft and OpenAI would change how people used Microsoft products and services. This created the Google catch up and PR push. We are in the midst of this at the start of 2025.

Third, some of the problems associated with enterprise information and an employee’s finding exactly what he or she needs will be solved. However, not “all” of the problems will be solved. Why? The nature of information is that it is a bit like pushing mercury around. The task requires fresh thinking.

To sum up, the problem of search is an excellent illustration of the old Hegelian chestnut of thesis, antithesis, and synthesis. This means the problem of search is unlikely to be “solved.” Humans want answers. Some humans want to verify answers, which means that the data on the salesperson’s laptop must be included. When the detail-oriented human learns that the salesperson’s data are missing, the end of the “search solution” has begun.

To the question “Will one big company dominate?” the answer is, in my opinion, maybe in some use cases. Monopolies seem to be the natural state of social media, online advertising, and certain cloud services. For finding information, I don’t think the smart software will be able to deliver. Examples are likely to include [a] use cases in China and similar countries, [b] big multi-national organizations with information silos, [c] entities involved in two or more classified activities for a government, [d] high-risk legal cases, and [e] activities related to innovation, trade secrets, and patents, among others.

The point is that search and retrieval remains an extraordinarily difficult problem to solve in many situations. LLMs contribute some useful functional options, but by themselves, these approaches are unlikely to avoid the reefs which sank the good ships Autonomy and Fast Search & Transfer, and dozens of others competing in the search space.

Maybe Yogi Berra did not say “Déjà vu all over again.” That’s okay. I will say it. Enterprise search is “Déjà vu all over again.”

Stephen E Arnold, January 7, 2025

Why Buzzwords Create Problems. Big Problems, Right, Microsoft?

January 7, 2025

This is an official dinobaby post. No smart software involved in this blog post.

I read an essay by Steven Sinofsky. He worked at Microsoft. You can read about him in Wikipedia because he was a manager possibly associated with Clippy. He wrote an essay called “225. Systems Ideas that Sound Good But Almost Never Work—‘Let’s just…’” The write up is about “engineering patterns that sound good but almost never work as intended.”

I noticed something interesting about his explanation of why many software solutions go off the rails, fail to work, create security opportunities for bad actors associated with entities not too happy with the United States, and cause ongoing headaches for hundreds of millions of people.

Here is a partial list of the words and bound phrases from his essay:

Add an API
Anomaly detection
Asynchronous
Cross platform
DSL
Escape to native
Hybrid parallelism
Multi-master writes
Peer to peer
Pluggable
Sync the data
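To see why “sounds good” and “works” diverge, consider a minimal sketch of the “sync the data” pattern from the list above, implemented the “let’s just” way with last-write-wins conflict handling. The records and scenario are invented for illustration; Sinofsky’s essay supplies only the pattern name.

```python
import copy

def sync_last_write_wins(replica_a: dict, replica_b: dict) -> dict:
    """'Let's just sync the data': keep whichever copy of each record
    has the newer timestamp. Sounds reasonable; quietly loses edits."""
    merged = {}
    for key in replica_a.keys() | replica_b.keys():
        a, b = replica_a.get(key), replica_b.get(key)
        if a is None or (b is not None and b["ts"] > a["ts"]):
            merged[key] = copy.deepcopy(b)
        else:
            merged[key] = copy.deepcopy(a)
    return merged

# Two devices edit different fields of the same contact while offline.
# The laptop updates the email at ts=11; the phone updates the number at ts=12.
laptop = {"contact:1": {"ts": 11, "phone": "555-0100", "email": "new@example.com"}}
phone = {"contact:1": {"ts": 12, "phone": "555-9999", "email": "old@example.com"}}

merged = sync_last_write_wins(laptop, phone)
# The email edit is silently discarded: the phone's record "won" wholesale.
print(merged["contact:1"])  # {'ts': 12, 'phone': '555-9999', 'email': 'old@example.com'}
```

The jargon names the goal, not the hard part: detecting and merging concurrent edits is where the actual engineering lives, and it is exactly what the buzzword papers over.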

What struck me about this essay is that it reveals something I think is important about Microsoft and probably other firms tapping the expertise of the author; that is, the jargon drives how the software is implemented.

I am not certain that my statement is accurate for software in general. But for this short blog post, let’s assume that it applies to some software (and I am including Microsoft’s own stellar solutions as well as products from other high profile and wildly successful vendors). With the ground rules established, I want to offer several observations about this “jargon drives the software engineering” assertion.

First, the resulting software is flawed. Problems are not actually resolved. The problems are papered over with whatever the trendy buzzword says will work. The approach makes sense because actual problem solving may not be possible within a given time allocation, or because a working solution may fail, which requires figuring out how not to fail again.

Second, the terms reveal that marketing think takes precedence over engineering think. Here’s what the jargon creators do. These sales-oriented types grab terms that sound good and refer to an approach. The “team” coalesces around the jargon, and the jargon directs how the software is approached. Does hybrid parallelism “work”? Who knows, but it is the path forward. The manager says, “Let’s go, team,” and out comes Clippy or the weird opaqueness of the “ribbon.”

Third, the jargon shaped by art history majors and advertising mavens defines the engineering approach. The more successful the technical jargon, the more likely those people who studied Picasso’s colors or Milton’s Paradise Regained define the technical frame in which a “solution” is crafted.

How good is software created in this way? Answer: Good enough.

How reliable is software created in this way? Answer: Who knows until someone like a paying customer actually uses the software.

How secure is the software created in this way? Answer: It is not secure, as the breaches of the Department of the Treasury, the US telecommunications companies, and the mind-boggling number of security lapses in 2024 prove.

Net net: Engineering solutions based on jargon are not intended to deliver excellence. The approach is simply “good enough.” Now we have some evidence that industry leaders realize the fact. Right, Clippy?

Stephen E Arnold, January 8, 2025

Good Enough Junk Food: Knowledge without the Work

December 16, 2024

This write up emerged from the dinobaby’s own mind. Yes, the dinobaby used AI because this write up is about the knowledge value lost with smart software. Intellectual junk food is not the same as my mother’s overcooked chicken which she killed on her aunt’s farm. Cluck, cluck, squawk.

I plugged the Guardian’s opinion essay into You.com’s Chat GPT-4o model. You can read the original essay titled “Is Doom Scrolling Really Rotting Our Brains? The Evidence Is Getting Harder to Ignore.” The original essay is clear, reasonably well researched, and structured to speak to [a] a person who has access to the Guardian online or in (gasp!) a print form, [b] who reads the Guardian’s opinion essays, and [c] who has the ability to recognize that “brain rot” has probably affected that individual, co-workers, government leaders, children, and people who repair refrigerators.


This illustration is good enough and it was produced by the AI system Magic Studio.

Here’s the summary Chat GPT-4o spit out:

The essay critiques the decline of human intellect in the digital age, using the viral “Skibidi Toilet” video as a cultural metaphor for deteriorating focus and intellectual engagement. The Oxford English Dictionary’s 2024 word of the year, “brain rot”, reflects this phenomenon, defined as the mental degradation caused by excessive consumption of trivial online content. The essay draws on academic research to demonstrate how compulsive internet use is literally shrinking grey matter, weakening memory, shortening attention spans, and distorting cognitive functions. Studies from Harvard, Oxford, and King’s College reveal the harmful effects of media multitasking and heavy internet usage, with findings such as reduced grey matter in the brain and the risk of “digital dementia” in young people during critical developmental years.

The prevalence of infinite scroll and dopamine-driven design in apps is highlighted as a deliberate tactic by tech companies to exploit human psychology for profit, creating powerful addiction loops. Former Google design ethicist Tristan Harris warns that this “race to the bottom of the brain stem” manipulates fear, outrage, and reward systems, potentially driving society to catastrophe if left unchecked.

Despite the grim outlook, the essay finds hope in the growing self-awareness among younger generations, who coined the term “brain rot” and recognize its effects. Movements advocating for reduced technology use, such as switching to dumbphones or promoting smartphone-free childhoods, are gaining traction. These trends suggest the possibility of reclaiming mental clarity and focus, countering the current trajectory of cognitive decline.

Ultimately, the essay positions “Skibidi Toilet” as both a symptom of modern intellectual decay and a call to action. Humanity stands at a critical juncture: to either rise above the current trends of distraction and degradation or succumb further to the “U-bend” of intellectual regression.

For what it is worth, this is a typical bite-sized knowledge cookie. No one will know if the summary is good or bad unless that person takes the time to get the original full text and compare it with this AI-generated output. The informational fast food provides a sugar jolt, either from saving time or from the summary consumer’s belief that the important information is on the money. A knowledge cookie, if you will, or maybe intellectual junk food?

Is this summary good enough? From my point of view, it is just okay; that is, good enough. What else is required? Flash back to 1982, when the ABI/INFORM database was a commercial success. A couple of competitors were trying to capture our customers, which was tricky. Intermediaries like Dialog Information Services, ESA, and LexisNexis (remember Buster and his silver jumpsuit?), among others, “owned” the direct relationship with the companies that paid the intermediaries to use the commercial databases on their systems. Then the intermediaries shared some information with us, the database producers.

How did a special librarian or a researcher “find” or “know about” our database? The savvy database producers provided information to the individuals interested in a business and management related commercial database. We participated in niche trade shows. We held training programs and publicized them with our partners Dow Jones News Retrieval, Investext, Predicasts, and Disclosure, among a few others. Our senior professionals gave lectures about controlled term indexing, the value of classification codes, and specific techniques to retrieve a handful of relevant citations and abstracts from our online archive. We issued news releases about new sources of information we added, in most cases with permission of the publisher.

We did not use machine indexing. We did have a wizard who created a couple of automatic indexing systems. However, when we saw the results of what the software of that era could do, we fell back on human indexers, many of whom had professional training in the subject matter they were indexing. A good example was our coverage of real estate management activities. The person who handled this content was a lawyer who preferred reading and working in our offices. At this time, the database was owned by the Courier-Journal & Louisville Times Co. The owner of the privately held firm was an early adopter of online and electronic technology. He took considerable pride in our line up of online databases. When he hired me, I recall his telling me, “Make the databases as good as you can.”

How did we create a business and management database that generated millions in revenue and whose index was used by entities like the Royal Bank of Canada to index its internal business information?

Here’s the secret sauce:

  1. We selected sources in most cases business journals, publications, and some other types of business related content; for example, the ANBAR management reports
  2. The selection of which specific article to summarize was the responsibility of a managing editor with deep business knowledge
  3. Once an article was flagged as suitable for ABI/INFORM, it was routed to the specialist who created a summary of the source article. At that time, ABI/INFORM summaries or “abstracts” were limited to 150 words, excluding the metadata.
  4. An indexing specialist would then read the abstract and assign quite specific index terms from our proprietary controlled vocabulary. The indexing included such items as four to six index terms from our controlled vocabulary and a classification code like 7700 to indicate “marketing,” with additional two-digit indicators to make explicit that the source document was about marketing and direct mail or some similar subcategory of marketing. We also included codes to disambiguate between a railroad terminal and a computer terminal because source documents assumed the reader would “know” the specific field to which the term’s meaning belonged. We added geographic codes, so the person looking for information could locate employee stock ownership in a specific geographic region like Northern California, and a number of other codes specifically designed to allow precise, comprehensive retrieval of abstracts about business and management. Some of the systems permitted free text searching of the abstract, and we considered that a supplement to our quite detailed indexing.
  5. Each abstract and its index terms were checked by a quality control process using people who had demonstrated their interest in our product and their ability to double check the indexing.
  6. We had proprietary “content management systems” and these generated the specific file formats required by our intermediaries.
  7. Each week we updated our database and we were exploring daily updates for our companion product called Business Dateline when the Courier Journal was broken up and the database operation sold to a movie camera company, Bell+Howell.
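For illustration, here is a rough sketch of what one of those records might look like as a data structure, with the editorial rules above expressed as checks. The field names, vocabulary entries, and code values are my stand-ins, not the actual ABI/INFORM formats.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the proprietary controlled vocabulary and codes.
CONTROLLED_VOCABULARY = {"direct mail", "marketing strategy", "mail order",
                         "customer retention", "advertising", "catalogs"}
CLASSIFICATION_CODES = {"7700": "marketing"}  # family code, e.g. 7700 = marketing
GEO_CODES = {"9107": "Northern California"}   # invented code value

@dataclass
class AbstractRecord:
    source: str          # journal or publication title
    title: str
    abstract: str        # human-written summary, max 150 words
    index_terms: list    # four to six terms from the controlled vocabulary
    classification: str  # e.g. "7710" = 7700 family plus two-digit subcategory
    geo_code: str

    def validate(self) -> None:
        if len(self.abstract.split()) > 150:
            raise ValueError("abstract exceeds the 150-word limit")
        if not 4 <= len(self.index_terms) <= 6:
            raise ValueError("expected four to six controlled index terms")
        if any(t not in CONTROLLED_VOCABULARY for t in self.index_terms):
            raise ValueError("index term not in the controlled vocabulary")
        if self.classification[:2] + "00" not in CLASSIFICATION_CODES:
            raise ValueError("unknown classification family")
        if self.geo_code not in GEO_CODES:
            raise ValueError("unknown geographic code")

record = AbstractRecord(
    source="Journal of Direct Marketing",     # sample value, not a real entry
    title="Catalog retailers and repeat buyers",
    abstract="A 150-word-or-fewer summary written by a subject specialist...",
    index_terms=["direct mail", "catalogs", "customer retention", "marketing strategy"],
    classification="7710",  # marketing family, direct-mail-style subcategory
    geo_code="9107",
)
record.validate()  # raises if any editorial rule above is broken
```

The point of the sketch: every field was the product of human editorial judgment applied before the record ever reached a searcher.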

Chat GPT-4o created the 300-word summary without the human knowledge, expertise, and effort. Consequently, that knowledge-based workflow has been replaced by smart software which can produce a summary in less than 30 seconds.

And that summary is, from my point of view, good enough. There are some trade-offs:

  1. Chat GPT-4o is reactive. Feed it a url or a text, and it will summarize it. Gone is the knowledge-based approach to select a specific, high-value source document for inclusion in the database. Our focus was informed selection. People paid to access the database because of the informed choice about what to put in the database.
  2. The summary does not include the ABI/INFORM key points and actionable element of the source document. The summary is what a high school or junior college graduate would create if a writing teacher assigned a “how to write a précis” as part of the course requirements. In general, high school and junior college graduates are not into nuance and cannot determine the pivotal information payload in a source document.
  3. The precise indexing and tagging is absent. One could create 1,000 such summaries, toss them into MISTRAL, and do a search. The result is great if one is uninformed about the importance of editorial policies, knowledge-based workflows, and precise, thorough indexing.

The reason I am sharing some of this “ancient” online history is:

  1. The loss of quality in online information is far more serious than most people understand. Getting a summary today is no big deal. What’s lost is simply not on these individuals’ radar.
  2. The lack of an editorial policy, precise date and time information, and fine-grained indexing means that one has to wade through a mass of undifferentiated information. ABI/INFORM in the 1980s delivered a handful of citations directly on point with the user’s query. Today no one knows or cares about precision and recall.
  3. It is now more difficult than at any other time in my professional work career to locate needed information. Public libraries do not have the money to obtain reference materials, books, journals, and other content. If the content is online, it is a dumbed down and often cut rate version of the old-fashioned commercial databases created by informed professionals.
  4. People look up information online and remain dumb; that is, the majority of the people with whom I come in contact routinely ask me and my team, “Where do you get your information?” We even have a slide in our CyberSocial lecture about “how” and “where.” The analysts and researchers in the audience usually don’t know so an entire subculture of open source information professionals has come into existence. These people are largely on their own and have to do work which once was a matter of querying a database like ABI/INFORM, Predicasts, Disclosure, Agricola, etc.

Sure, the essay is good. The summary is good enough. Where does that leave a person trying to understand the factual and logical errors in a new book examining social media? In my opinion, people are in the dark and have a difficult time finding information. Making decisions in the dark or without on-point, accurate information is a recipe for a really bad batch of cookies.

Stephen E Arnold, December 15, 2024

We Need a Meeting about Meetings after I Get Back from a Meeting

December 10, 2024

This blog post flowed from the sluggish and infertile mind of a real live dinobaby. If there is art, smart software of some type was probably involved.

I heard that Mr. Jeff Bezos, the Big Daddy of online bookstores, likes chaotic and messy meetings. Socrates might not have been down with that approach.

As you know, Socrates was a teacher who ended up dead because he asked annoying questions. “Socratic thinking” helps people remain open to new ideas. Do new ideas emerge from business meetings? Most of those whom I know grumble, pointing out to me that meetings waste their time. Michael Poczwardowski challenges that assumption with Socratic thinking in the Perspectiveship post “Socratic Questioning – ‘Meetings are a waste of time’”.

Socratic-based discussions are led by someone who only asks questions. By asking only questions, the discussion can focus on challenging assumptions, critical thinking, and first principles (dividing problems into basic elements) to broaden perspectives and understanding. Poczwardowski brings the idea that “meetings are a waste of time” to the discussion forum.

Poczwardowski introduces readers to Socratic thinking with the steps of clarification, challenging assumptions, looking for data/evidence, changing perspective, exploring consequences and implications, and questioning the question. Here’s my summary, done by a person with an advanced degree in information science. (I know I am not as smart as Google’s AI, but I do what I can with my limited resources, thank you.)

“The key is to remain open to possibilities and be ready to face our beliefs. Socratic questioning is a great way to work on improving our critical thinking.

When following Socratic questioning ask to:

Clarify the idea: It helps us understand what we are talking about and to be on the same page

Challenge assumptions: Ask them to list their assumptions.

Look for evidence: Asking what kind of evidence they have can help them verify the sources of their beliefs

Change perspectives: Look at the problem from others’ points of view.

Explore consequences: Explore the possible outcomes and effects of actions to understand their impact”

Am I the only one who thinks this also sounds obvious? Ancient philosophers did inspire the modern approach to scientific thought. Galileo demonstrated that he would recant instead of going to prison or being killed. Perhaps I should convene a meeting to decide if the meeting is a waste of time. I will get back to you. I have a meeting coming up.

Whitney Grace, December 10, 2024

AI Automation: Spreading Like Covid and Masks Will Not Help

December 10, 2024

This blog post flowed from the sluggish and infertile mind of a real live dinobaby. If there is art, smart software of some type was probably involved.

Reddit is one of the last places on the Internet where you can find quality and useful information. Reddit serves as the Internet’s hub for news, tech support, trolls, and real-life perspectives about jobs. Here’s a Reddit downer in the ChatGPT thread for anyone who works in a field that can be automated: “Well this is it boys. I was just informed from my boss and HR that my entire profession is being automated away.”

For brevity’s sake here is the post:

“For context I work production in local news. Recently there’s been developments in AI driven systems that can do 100% of the production side of things which is, direct, audio operate, and graphic operate -all of those jobs are all now gone in one swoop. This has apparently been developed by the company Q ai. For the last decade I’ve worked in local news and have garnered skills I thought I would be able to take with me until my retirement, now at almost 30 years old, all of those job opportunities for me are gone in an instant. The only person that’s keeping their job is my manager, who will overlook the system and do maintenance if needed. That’s 20 jobs lost and 0 gained for our station. We were informed we are going to be the first station to implement this under our company. This means that as of now our entire production staff in our news station is being let go. Once the system is implemented and running smoothly then this system is going to be implemented nationwide (effectively eliminating tens of thousands of jobs.) There are going to be 0 new jobs built off of this AI platform. There are people I work with in their 50’s, single, no college education, no family, and no other place to land a job once this kicks in. I have no idea what’s going to happen to them. This is it guys. This is what our future with AI looks like. This isn’t creating any new jobs this is knocking out entire industry level jobs without replacing them.”

The post is followed by comments of commiseration, encouragement, and the usual doom and gloom. It’s not surprising that local news stations are automating their tasks, especially with the overhead associated with employees. This includes healthcare, retirement packages, vacation days, PTO, and more. AI is the perfect employee because it doesn’t complain or take time off. AI, however, lacks basic common sense and fact checking. We’re witnessing a change in the job market; it just sucks to live through it.

Whitney Grace, December 10, 2024

Deepfakes: An Interesting and Possibly Pernicious Arms Race

December 2, 2024

As it turns out, deepfakes are a difficult problem to contain. Who knew? As victims from celebrities to schoolchildren multiply exponentially, USA Today asks, “Can Legislation Combat the Surge of Non-Consensual Deepfake Porn?” Journalist Dana Taylor interviewed UCLA’s John Villasenor on the subject. To us, the answer is simple: Absolutely not. As with any technology, regulation is reactive while bad actors are proactive. Villasenor seems to agree. He states:

“It’s sort of an arms race, and the defense is always sort of a few steps behind the offense, right? In other words that you make a detection tool that, let’s say, is good at detecting today’s deepfakes, but then tomorrow somebody has a new deepfake creation technology that is even better and it can fool the current detection technology. And so then you update your detection technology so it can detect the new deepfake technology, but then the deepfake technology evolves again.”

Exactly. So if governments are powerless to stop this horror, what can? Perhaps big firms will fight tech with tech. The professor dreams:

“So I think the longer term solution would have to be automated technologies that are used and hopefully run by the people who run the servers where these are hosted. Because I think any reputable, for example, social media company would not want this kind of content on their own site. So they have it within their control to develop technologies that can detect and automatically filter some of this stuff out. And I think that would go a long way towards mitigating it.”

Sure. But what can be done while we wait on big tech to solve the problem it unleashed? Individual responsibility, baby:

“I certainly think it’s good for everybody, and particularly young people these days to be just really aware of knowing how to use the internet responsibly and being careful about the kinds of images that they share on the internet. … Even images that are sort of maybe not crossing the line into being sort of specifically explicit but are close enough to it that it wouldn’t be as hard to modify being aware of that kind of thing as well.”

Great, thanks. Admitting he may sound naive, Villasenor also envisions education to the (partial) rescue:

“There’s some bad actors that are never going to stop being bad actors, but there’s some fraction of people who I think with some education would perhaps be less likely to engage in creating these sorts of… disseminating these sorts of videos.”

Our view is that digital tools allow the dark side of individuals to emerge and expand.

Cynthia Murrell, December 2, 2024

AI In Group Communications: The Good and the Bad

November 29, 2024

In theory, AI that can synthesize many voices into one concise, actionable statement is very helpful. In practice, it is complicated. The Tepper School of Business at Carnegie Mellon announces, “New Paper Co-Authored by Tepper School Researchers Articulates How Large Language Models are Changing Collective Intelligence Forever.” Researchers from Tepper and other institutions worked together on the paper, which was published in Nature Human Behavior. We learn:

“[Professor Anita Williams] Woolley and her co-authors considered how LLMs process and create text, particularly their impact on collective intelligence. For example, LLMs can make it easier for people from different backgrounds and languages to communicate, which means groups can collaborate more effectively. This technology helps share ideas and information smoothly, leading to more inclusive and productive online interactions. While LLMs offer many benefits, they also present challenges, such as ensuring that all voices are heard equally.”

Indeed. The write-up continues:

“‘Because LLMs learn from available online information, they can sometimes overlook minority perspectives or emphasize the most common opinions, which can create a false sense of agreement,’ said Jason Burton, an assistant professor at Copenhagen Business School. Another issue is that LLMs can spread incorrect information if not properly managed because they learn from the vast and varied content available online, which often includes false or misleading data. Without careful oversight and regular updates to ensure data accuracy, LLMs can perpetuate and even amplify misinformation, making it crucial to manage these tools responsibly to avoid misleading outcomes in collective decision-making processes.”

In order to do so, the paper suggests, we must further explore LLMs’ ethical and practical implications. Only then can we craft effective guidelines for responsible AI summarization. Such standards are especially needed, the authors note, for any use of LLMs in policymaking and public discussions.

But not to worry. The big AI firms are all about due diligence, right?

Cynthia Murrell, November 29, 2024
