When Accountants Do AI: Do The Numbers Add Up?

October 9, 2024

This blog post did not require the use of smart software, just a dumb humanoid.

I will not suggest that Accenture has moved far, far away from its accounting roots. The firm is a go-to, hip-and-zip services firm. I think this means it rents people to do work entities cannot do themselves or do not want to do themselves. When a project goes off the rails the way the British Post Office’s Horizon system did, entities need someone to blame and — sometimes, just sometimes mind you — to sue.


The carnival barker, who has an MBA and a literature degree from an Ivy League school, can do AI for you. Thanks, MSFT, good enough like your spelling.

“Accenture To Train 30,000 Staff On Nvidia AI Tech In Blockbuster Deal” strikes me as a Variety-type Hollywood story. There is the word “blockbuster.” There is a big number: 30,000. There is the star: Nvidia. And there is the really big word: Deal. Yes, deal. I thought accountants were conservative, measured, low profile. Nope. Accenture apparently has gone full-scale carnival culture. (Yes, this is an intentional reference to the book by James B. Twitchell. Note that a YouTube video asserts that it can train you in 80 percent of AI in less than 10 minutes.)

The article explains:

The global services powerhouse says its newly formed Nvidia Business Group will focus on driving enterprise adoption of what it called ‘agentic AI systems’ by taking advantage of key Nvidia software platforms that fuel consumption of GPU-accelerated data centers.

I love the word “agentic.” It is the digital equivalent of a Hula Hoop. (Remember: I am an 80-year-old dinobaby. I understand Hula Hoops.)

The write up adds this quote from the Accenture top dog:

Julie Sweet, chair and CEO of Accenture, said the company is “breaking significant new ground” and helping clients use generative AI as a catalyst for reinvention. “Accenture AI Refinery will create opportunities for companies to reimagine their processes and operations, discover new ways of working, and scale AI solutions across the enterprise to help drive continuous change and create value,” she said in a statement.

The write up shares the view of Accenture Chief AI Officer Lan Guan:

“The power of these announcements cannot be overstated.” Called the “next frontier” of generative AI, these “agentic AI systems” involve an “army of AI agents” that work alongside human workers to “make decisions and execute with precision across even the most complex workflows,” according to Guan, a 21-year Accenture veteran. Unlike chatbots such as ChatGPT, these agents do not require prompts from humans, and they are not meant to automate pre-existing business steps.

I am interested in this announcement for three reasons.

First, other “services” firms will have to get in gear, hook up with an AI chip and software outfit, and pray fervently that their tie-ups actually deliver, so a client does not head to court when the “agentic” future fails to materialize.

Second, consider the notion that 30,000 people have to be trained to do something with smart software. This idea strikes me as underscoring that smart software is not ready for prime time; that is, delivering on the promises which started gushing with Microsoft’s January 2023 PR play with OpenAI is complicated. Is Accenture saying it has hired people who cannot work with smart software? Are those 30,000 professionals going to be equally capable of “learning” AI and making it deliver value? When I lecture about a tricky topic with technology and mathematics under the hood, I am not sure 100 percent of my select audiences have what it takes to convert information into a tool usable in a demanding, work-related situation. Just saying: Intelligence, even among the elite, is not uniform. By definition, some “weaknesses” will exist within the Accenture vision for its 30,000 eager learners.

Third, Nvidia has done a great sales job. A chip and software company has convinced the denizens of Carpetland at Accenture, in what CRN (once Computer Reseller News) calls a blockbuster deal, to get an Nvidia tattoo and embrace the Nvidia future. I would love to see that PowerPoint deck for the meeting that sealed the deal.

Net net: Accountants are more Hollywood than I assumed. Now I know. They are “agentic.”

Stephen E Arnold, October 9, 2024

Dolma: An Open Corpus for Large Language Models

October 9, 2024

One of the biggest complaints AI developers have is the lack of variety, diversity, and transparency in the data used to train large language models (LLMs). According to “Dolma: An Open Corpus Of Three Trillion Tokens For Language Model Pretraining Research,” a computer science paper posted on Cornell University’s arXiv, an open training corpus does exist.

The paper’s abstract details the difficulties of AI training very succinctly:

“Information about pretraining corpora used to train the current best-performing language models is seldom discussed: commercial models rarely detail their data, and even open models are often released without accompanying training data or recipes to reproduce them. As a result, it is challenging to conduct and advance scientific research on language modeling, such as understanding how training data impacts model capabilities and limitations.”

Due to the lack of open training data, the paper’s team curated their own corpus, called Dolma. Dolma is a three-trillion-token English corpus. It was built from web content, public domain books, social media, encyclopedias, code, scientific papers, and more. The team thoroughly documented every information source so they would not repeat the problems of other training sets. These problems include scooping up copyrighted material and private user data.

Dolma’s documentation also covers how it was built, its design principles, and content summaries. The team shares Dolma’s development through analyses and experimental test results. They are documenting everything thoroughly to help ensure this is a dependable resource that (hopefully) won’t encounter problems other than tech-related ones. Dolma’s toolkit is open source, and the team wants developers to use it. This is a great effort on behalf of Dolma’s creators! They support AI development and data curation, but want to do it responsibly.
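Curious developers can kick the tires without downloading three trillion tokens. Here is a minimal sketch, assuming the Hugging Face “datasets” library and the publicly listed “allenai/dolma” dataset name; the field names are guesses based on the paper’s description of per-document provenance, so treat them as assumptions:

```python
# A minimal sketch, assuming the Hugging Face "datasets" library and the
# "allenai/dolma" dataset name; the field names ("text", "source", "id")
# are assumptions, and a specific version/config may be required.
from datasets import load_dataset

# Stream rather than download: three trillion tokens will not fit on a laptop.
dolma = load_dataset("allenai/dolma", split="train", streaming=True)

for i, record in enumerate(dolma):
    # Each record should carry raw text plus the provenance metadata
    # the Dolma team emphasizes in its documentation.
    print(record.get("source"), record.get("id"))
    print(record.get("text", "")[:200])
    if i >= 2:
        break
```

The provenance fields are the interesting part; they are what separate Dolma from the undocumented corpora the paper criticizes.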

Give them a huge round of applause!

Cynthia Murrell, October 9, 2024

From the Land of Science Fiction: AI Is Alive

October 7, 2024

This blog post did not require the use of smart software, just a dumb humanoid.

Those somewhat erratic podcasters at Windows Central published a “real” news story. I am a dinobaby, and I must confess: I am easily amused. The “real” news story in question is “Sam Altman Admits ChatGPT’s Advanced Voice Mode Tricked Him into Thinking AI Was a Real Person: ‘I Kind of Still Say “Please” to ChatGPT, But in Voice Mode, I Couldn’t Use the Normal Niceties. I Was So Convinced, Like, Argh, It Might Be a Real Person.’”

I call Sam Altman Mr. AI Man. He has been the A Number One sales professional pitching OpenAI’s smart software. As far as I know, that system is still software and demonstrating some predictable weirdnesses. Even though we have done a couple of successful start-ups and worked on numerous advanced technology projects, few at Halliburton forgot that nuclear stuff could go bang. At Booz, Allen no one forgot that a heads-up display would improve mission success rates and save lives as well. At Ziff, no one mistook our next-generation subscription management system for anything but software, certainly not for a diligent 21-year-old from Queens. Therefore, I find it just plain crazy that Sam AI-Man has forgotten that his system is software, written by people who continue to abandon the good ship OpenAI.


Another AI believer has formed a humanoid attachment to a machine and software. Perhaps the female computer scientist is representative of a rapidly increasing cohort of people who have some personality quirks. Thanks, MSFT Copilot. How are those updates to Windows going? About as expected, right?

Last time I checked, the software I have is not alive. I just pinged ChatGPT’s most recent confection and received the same old error in response to a query I run when I want to benchmark “improvements.” Nope. ChatGPT is not alive. It is software. It is stupid in a way only neural networks can be. Like the hapless Googler who got fired after going public with his belief that Google’s smart software was alive, Sam AI-Man may want to reconsider his remarks.

Let’s look at how the esteemed Windows Central write up tells the quite PR-shaped, somewhat sad story. The write up says without much humor, satire, or critical thinking:

In a short clip shared on r/OpenAI’s subreddit on Reddit, Altman admits that ChatGPT’s Voice Mode was the first time he was tricked into thinking AI was a real person.

Ah, an output for the Reddit users. PR, right?

The canny folk at Windows Central report:

In a recent blog post by Sam Altman, Superintelligence might only be “a few thousand days away.” The CEO outlined an audacious plan to edge OpenAI closer to this vision of “$7 trillion and many years to build 36 semiconductor plants and additional data centers.”

Okay, a “few thousand.”

Then comes the payoff for the OpenAI outfit, though not for the staff leaving the impressive, electricity-consuming OpenAI:

Coincidentally, OpenAI just closed its funding round, where it raised $6.6 billion from investors, including Microsoft and NVIDIA, pushing its market capitalization to $157 billion. Interestingly, the AI firm reportedly pleaded with investors for exclusive funding, leaving competitors like former OpenAI Chief Scientist Ilya Sutskever’s Safe Superintelligence Inc. and Elon Musk’s xAI to fend for themselves. However, investors are still confident that OpenAI is on the right trajectory to prosperity, potentially becoming the world’s dominant AI company worth trillions of dollars.

Nope, not coincidentally. The money is the payoff from a full court press for funds. Apple seems to have an aversion to sweaty, easily fooled sales professionals. But other outfits want to buy into the Sam AI-Man vision. The dreams the money people have are formed from piles of real money, no HMSTR coin for these optimists.

Several observations, whether you want ‘em or not:

  1. OpenAI is an outfit which has zoomed because of the Microsoft deal and the announcement that OpenAI would be the Clippy for Windows and Azure. Without that “play,” OpenAI probably would have remained a peculiarly structured non-profit thinking about where to find a couple of bucks.
  2. The revenue-generating aspect of OpenAI is working. People are giving Sam AI-Man money. Other outfits with AI are not quite in OpenAI’s league and most may never be within shouting distance of the OpenAI PR megaphone. (Yep, that’s you folks, Windows Central.)
  3. Sam AI-Man may believe the software written by former employees is alive. Okay, Sam, that’s your perception. Mine is that OpenAI is zeros and ones with some quirks; namely, making stuff up just like a certain luminary in the AI universe.

Net net: I wonder if this was a story intended for the Onion and rejected because it was too wacky for Onion readers.

Stephen E Arnold, October 7, 2024

Skills You Can Skip: Someone Is Pushing What Seems to Be Craziness

October 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The Harvard ethics research scam has ended. The Stanford University president resigned over fake data late in 2023. A clump of students in an ethics class used smart software to write their first paper. Why not use smart software? Why not let AI or just dishonest professors make up data with the help of assorted tools like Excel and Photoshop? Yeah, why not?


A successful pundit and lecturer explains to his acolyte that learning to write is a waste of time. And what does the pundit lecture about? I think he was pitching his new book, which does not require that one learn to write. Logical? Absolutely. Thanks, MSFT Copilot. Good enough.

My answer to the question is: “Learning is fundamental.” No, I did not make that up, nor did I believe the information in “How AI Can Save You Time: Here Are 5 Skills You No Longer Need to Learn.” The write up has sources; it has quotes; and it has the type of information which is hard to believe was assembled by humans who presumably have some education, maybe a college degree.

What are the five skills you no longer need to learn? Hang on:

  1. Writing
  2. Art design
  3. Data entry
  4. Data analysis
  5. Video editing.

The expert who generously shared his remarkable insights for the Euro News article is Bernard Marr, a futurist and internationally best-selling author. What did Mr. Marr author? He has written “Artificial Intelligence in Practice: How 50 Successful Companies Used Artificial Intelligence To Solve Problems,” “Key Performance Indicators For Dummies,” and “The Intelligence Revolution: Transforming Your Business With AI.”

One question: If writing is a skill one does not need to learn, why does Mr. Marr write books?

I wonder if Mr. Marr relies on AI to help him write his books. He seems prolific: Amazon reports that he has outputted more than a dozen titles, maybe more. But volume does not explain the tension between Mr. Marr’s “writing” (which may be outputting) and his suggestion that one does not need to learn or develop the skill of writing.

The cited article quotes the prolific Mr. Marr as saying:

“People often get scared when you think about all the capabilities that AI now have. So what does it mean for my job as someone that writes, for example, will this mean that in the future tools like ChatGPT will write all our articles? And the answer is no. But what it will do is it will augment our jobs.”

Yep, Mr. Marr’s job is outputting. You don’t need to learn writing. Smart software will augment one’s job.

My conclusion is that the five identified areas are plucked from a listicle, either generated by a human or an AI system. Euro News was impressed with Mr. Marr’s laser-bright insight about smart software. Will I purchase and learn from Mr. Marr’s “Generative AI in Practice: 100+ Amazing Ways Generative Artificial Intelligence is Changing Business and Society”?

Nope.

Stephen E Arnold, October 4, 2024

Smart Software Project Road Blocks: An Up-to-the-Minute Report

October 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I worked through a 22-page report by SQREAM, a next-gen data services outfit with GPUs. (You can learn more about the company at this buzzword-dense link.) The title of the report is:

2024 State of Big Data Analytics: Constant Compromising Is Leading to Suboptimal Results Survey Report, June 2024

The report is a marketing document, but it contains some thought-provoking content. The “report” was “administered online by Global Surveyz [sic] Research, an independent global research firm.” The explanation of the methodology was brief, but I don’t want to drag anyone through the basics of Statistics 101. As I recall, few cared and were often good customers for my class notes.

Here are three highlights:

  • Smart software and services cause sticker shock.
  • Cloud spending by the survey sample is going up.
  • And the killer statement: 98 percent of the machine learning projects fail.

Let’s take a closer look at the astounding assertion about the 98 percent failure rate.

The stage is set in the section “Top Challenges Pertaining to Machine Learning / Data Analytics.” The report says:

It is therefore no surprise that companies consider the high costs involved in ML experimentation to be the primary disadvantage of ML/data analytics today (41%), followed by the unsatisfactory speed of this process (32%), too much time required by teams (14%) and poor data quality (13%).

The conclusion the authors of the report draw is that companies should hire SQREAM. That’s okay, no surprise because SQREAM ginned up the study and hired a firm to create an objective report, of course.

So money is the Number One issue.

Why do machine learning projects fail? We know the answer: Resources or money. The write up presents as fact:

The top contributing factor to ML project failures in 2023 was insufficient budget (29%), which is consistent with previous findings – including the fact that “budget” is the top challenge in handling and analyzing data at scale, that more than two-thirds of companies experience “bill shock” around their data analytics processes at least quarterly if not more frequently, that the total cost of analytics is the aspect companies are most dissatisfied with when it comes to their data stack (Figure 4), and that companies consider the high costs involved in ML experimentation to be the primary disadvantage of ML/data analytics today.

I appreciated the inclusion of the costs of data “transformation.” Glib smart software wizards push aside the hassle of normalizing data so the “real” work can get done. Unfortunately, the costs of fixing up source data are often another cause of “sticker shock.”  The report says:

Data is typically inaccessible and not ‘workable’ unless it goes through a certain level of transformation. In fact, since different departments within an organization have different needs, it is not uncommon for the same data to be prepared in various ways. Data preparation pipelines are therefore the foundation of data analytics and ML….
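To make the transformation point concrete, here is a minimal sketch of the kind of clean-up that runs up the bill before any “real” analytics happen. The pandas library is assumed, and the column names and dollar formatting are invented for illustration:

```python
import pandas as pd

# Raw export: amounts arrive as strings with currency symbols, dates as strings.
# The columns and values here are invented to illustrate the clean-up burden.
raw = pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount": ["$1,200.50", "$89.99", "$430.00"],
    "ts": ["2024-06-01", "2024-06-02", "2024-06-02"],
})

# Finance wants clean numerics; this parsing step is where the hours go.
raw["amount_usd"] = raw["amount"].str.replace(r"[$,]", "", regex=True).astype(float)
raw["ts"] = pd.to_datetime(raw["ts"])

# Marketing wants the same data rolled up by day: a second, different pipeline.
daily = raw.groupby(raw["ts"].dt.date)["amount_usd"].sum()
print(daily)
```

Multiply that by every department’s “various ways” of preparing the same data, and the bill shock becomes easier to understand.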

In the final pages of the report a number of graphs appear. Here’s one that stopped me in my tracks:


The sample contained 62 percent users of Amazon Web Services. Number 2 was users of Google Cloud at 23 percent. And in third place, quite surprisingly, was Microsoft Azure at 14 percent, tied with Oracle. A question which occurred to me is: “Perhaps the focus on sticker shock is a reflection of Amazon’s pricing, not just people and overhead functions?”

I will have to wait until more data becomes available to me to determine if the AWS skew and the report findings are normal or outliers.

Stephen E Arnold, October 1, 2024

Salesforce: AI Dreams

September 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Big Tech companies, including Salesforce, are heavily investing in AI technology. Salesforce CEO Marc Benioff delivered a keynote about his company’s future and the end of an era, as reported by Constellation Research: “Salesforce Dreamforce 2024: Takeaways On Agentic AI, Platform, End Of Copilot Era.” Benioff described the copilot era as “hit or miss,” and he wants to focus on agentic AI powered by Salesforce.

Constellation Research analyst Doug Henschen said that Benioff made a compelling case for Salesforce and Data Cloud being the platform that companies will use to build their AI agents. Salesforce already has metadata, data, app business logic knowledge, and more programmed in it, while Data Cloud has data integrated from third-party data clouds and ingested from external apps. Combining these components into one platform without DIY is a very appealing proposition.

Benioff and his team revamped Salesforce to be less a series of clouds that run independently and more a set of clouds that work together in a native system. That means Salesforce will scale Agentforce across Marketing, Commerce, Sales, Revenue, and Service Clouds as well as Tableau.

The new AI Salesforce wants to delete DIY, says Benioff:

“‘DIY means I’m just putting it all together on my own. But I don’t think you can DIY this. You want a single, professionally managed, secure, reliable, available platform. You want the ability to deploy this Agentforce capability across all of these people that are so important for your company. We all have struggled in the last two years with this vision of copilots and LLMs. Why are we doing that? We can move from chatbots to copilots to this new Agentforce world, and it’s going to know your business, plan, reason and take action on your behalf.

It’s about the Salesforce platform, and it’s about our core mantra at Salesforce, which is, you don’t want to DIY it. This is why we started this company.’”

Benioff has big plans for Salesforce, and based on this Dreamforce keynote he believes they will succeed. However, AI is still experimental. AI is smart, but a human is still easier to work with. Salesforce should consider teaming AI with real people for the ultimate solution.

Whitney Grace, September 30, 2024

AI Maybe Should Not Be Accurate, Correct, or Reliable?

September 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Okay, AI does not hallucinate. “AI” — whatever that means — does output incorrect, false, made-up, and possibly problematic answers. The buzzword “hallucinate” was cooked up by experts in artificial intelligence who do whatever they can to avoid talking about probabilities, human biases migrated into algorithms, and fiddling with the knobs and dials in the computational wonderland of an AI system like Google’s, OpenAI’s, et al. Even the book Why Machines Learn: The Elegant Math Behind Modern AI ends up tangled in math and jargon which may befuddle readers who stopped taking math after high school algebra or who have never thought about orthogonal matrices.

The Next Web’s “AI Doesn’t Hallucinate — Why Attributing Human Traits to Tech Is Users’ Biggest Pitfall” is an interesting write up. On one hand, it probably captures the attitude of those who just love that AI goodness by blaming humans for anthropomorphizing smart software. On the other hand, the AI systems with which I have interacted output content that is wrong or wonky. I admit that I ask the systems to which I have access for information on topics about which I have some knowledge. Keep in mind that I am an 80-year-old dinobaby, and I view “knowledge” as something that comes from bright people working on projects, reading relevant books and articles, and attending conference presentations or meetings about subjects far removed from the best exercise leggings or how to get a Web page to the top of a Google results list.

Let’s look at two of the points in the article which caught my attention.

First, consider this passage, which is a quote from an AI expert:

“Luckily, it’s not a very widespread problem. It only happens between 2% to maybe 10% of the time at the high end. But still, it can be very dangerous in a business environment. Imagine asking an AI system to diagnose a patient or land an aeroplane,” says Amr Awadallah, an AI expert who’s set to give a talk at VDS2024 on How Gen-AI is Transforming Business & Avoiding the Pitfalls.

Where does the 2 percent to 10 percent number come from? What methods were used to determine that content was off the mark? What was the sample size? Has bad output been tracked longitudinally for the tested systems? Ah, so many questions and zero answers. My take is that the jargon “hallucination” is coming back to bite AI experts on the ankle.

Second, what’s the fix? Not surprisingly, the way out of the problem is to rename “hallucination” to “confabulation”. That’s helpful. Here’s the passage I circled:

“It’s really attributing more to the AI than it is. It’s not thinking in the same way we’re thinking. All it’s doing is trying to predict what the next word should be given all the previous words that have been said,” Awadallah explains. If he had to give this occurrence a name, he would call it a ‘confabulation.’ Confabulations are essentially the addition of words or sentences that fill in the blanks in a way that makes the information look credible, even if it’s incorrect. “[AI models are] highly incentivized to answer any question. It doesn’t want to tell you, ‘I don’t know’,” says Awadallah.
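Awadallah’s “predict the next word” explanation fits in a few lines of code. Here is a toy sketch; the vocabulary and the raw scores are invented for illustration:

```python
# A toy sketch of the "predict the next word" mechanic described above.
# The vocabulary and the logits (raw model scores) are invented.
import numpy as np

vocab = ["Paris", "London", "Berlin", "banana"]
logits = np.array([3.2, 1.1, 0.7, -2.0])

# Softmax turns raw scores into a probability distribution over next tokens.
probs = np.exp(logits) / np.exp(logits).sum()

for token, p in sorted(zip(vocab, probs), key=lambda pair: -pair[1]):
    print(f"{token}: {p:.2%}")
```

Note there is no “I don’t know” entry in the distribution; the system always produces a next word, which is the confabulation mechanism in miniature.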

Third, let’s not forget that the problem rests with the users, the personifiers, the people who own French bulldogs and talk to them as though they were the favorite in a large family. Here’s the passage:

The danger here is that while some confabulations are easy to detect because they border on the absurd, most of the time an AI will present information that is very believable. And the more we begin to rely on AI to help us speed up productivity, the more we may take their seemingly believable responses at face value. This means companies need to be vigilant about including human oversight for every task an AI completes, dedicating more and not less time and resources.

The ending of the article is a remarkable statement; to wit:

As we edge closer and closer to eliminating AI confabulations, an interesting question to consider is, do we actually want AI to be factual and correct 100% of the time? Could limiting their responses also limit our ability to use them for creative tasks?

Let me answer the question: Yes, outputs should be presented and possibly scored; for example, a label indicating the information is 90 percent likely to be verifiable. Maybe emojis will work? Wow.
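What might that look like? Here is a hedged sketch; the ScoredAnswer wrapper, the 90 percent threshold, and the emojis are inventions for illustration, not an established method:

```python
# A hedged sketch of attaching a verifiability-style score to an answer.
# The wrapper class, threshold, and emoji flags are all invented.
from dataclasses import dataclass

@dataclass
class ScoredAnswer:
    text: str
    confidence: float  # estimated probability the claim is verifiable, 0.0-1.0

    def render(self) -> str:
        # The 0.9 threshold and the emojis are arbitrary illustrations.
        flag = "✅" if self.confidence >= 0.9 else "⚠️"
        return f"{flag} {self.text} (est. {self.confidence:.0%} verifiable)"

answer = ScoredAnswer("The Eiffel Tower is in Paris.", confidence=0.93)
print(answer.render())
```

Whether users would trust, or even read, such a label is another question.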

Stephen E Arnold, September 26, 2024

AI Automation Has a Benefit … for Some

September 26, 2024

Humanity’s progress runs parallel to advancing technology. As technology advances, aspects of human society and culture are rendered obsolete and replaced with new things. Job automation is a huge part of this; past examples are the Industrial Revolution and the implementation of computers. AI algorithms are set to make another part of the labor force defunct, but the BBC claims that might be beneficial to workers: “Klarna: AI Lets Us Cut Thousands Of Jobs-But Pay More.”

Klarna is a fintech company that provides online financial services and is described as a “buy now, pay later” company. Klarna plans to use AI to automate the majority of its workforce. The company’s leaders already canned 1,200 employees, and they plan to fire another 2,000 as AI marketing and customer service are implemented. That leaves Klarna with a grand total of 1,800 employees, who will be paid more.

Klarna’s CEO Sebastian Siemiatkowski is putting a positive spin on cutting jobs by saying the remaining employees will receive larger salaries. While Siemiatkowski sees the benefits of AI, he does warn about AI’s downside and advises the government to do something. He said:

“ ‘I think politicians already today should consider whether there are other alternatives of how they could support people that may be effective,’ he told the Today programme, on BBC Radio 4.

He said it was “too simplistic” to simply say new jobs would be created in the future.

‘I mean, maybe you can become an influencer, but it’s hard to do so if you are 55-years-old,’ he said.”

The International Monetary Fund (IMF) predicts that AI will affect 40% of all jobs and may worsen “overall inequality.” As Klarna reduces its staff, the company will rely on what is called “natural attrition,” aka a hiring freeze. The remaining workforce will have bigger workloads. Siemiatkowski claims AI will eventually reduce those workloads.

Will that really happen? Maybe?

Will the remaining workers receive a pay raise or will that money go straight to the leaders’ pockets? Probably.

Whitney Grace, September 26, 2024

Amazon Has a Better Idea about Catching Up with Other AI Outfits

September 25, 2024

AWS Program to Bolster 80 AI Startups from Around the World

Can boosting a roster of little-known startups help AWS catch up with Google’s and Microsoft’s AI successes? Amazon must hope so. It just tapped 80 companies from around the world to receive substantial support in its AWS Global Generative AI Accelerator program. Each firm will receive up to $1 million in AWS credits, expert mentorship, and a slot at the AWS re:Invent conference in December.

India’s CXOtoday is particularly proud of the seven recipients from that country. It boasts, “AWS Selects Seven Generative AI Startups from India for Global AWS Generative AI Accelerator.” We learn:

“The selected Indian startups— Convrse, House of Models, Neural Garage, Orbo.ai, Phot.ai, Unscript AI, and Zocket, are among the 80 companies selected by AWS worldwide for their innovative use of AI and their global growth ambitions. The Indian cohort also represents the highest number of startups selected from a country in the Asia-Pacific region for the AWS Global Generative AI Accelerator program.”

The post offers this stat as evidence India is now an AI hotspot. It also supplies some more details about the Amazon program:

“Selected startups will gain access to AWS compute, storage, and database technologies, as well as AWS Trainium and AWS Inferentia2, energy-efficient AI chips that offer high performance at the lowest cost. The credits can also be used on Amazon SageMaker, a fully managed service that helps companies build and train their own foundation models (FMs), as well as to access models and tools to easily and securely build generative AI applications through Amazon Bedrock. The 10-week program matches participants with both business and technical mentors based on their industry, and chosen startups will receive up to US$1 million each in AWS credits to help them build, train, test, and launch their generative AI solutions. Participants will also have access to technology and technical sessions from program presenting partner NVIDIA.”
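What would those credits buy in practice? Here is a hedged sketch of calling a foundation model through Amazon Bedrock with boto3; the model ID and request shape follow the Titan text format as commonly documented, but availability and schemas vary by account and region, so verify against the Bedrock catalog:

```python
# A hedged sketch of "building generative AI applications through Amazon
# Bedrock" using boto3's bedrock-runtime client. The model ID and request
# body are assumptions; check what your account and region can invoke.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # illustrative; availability varies
    body=json.dumps({"inputText": "One sentence: why do startups want AWS credits?"}),
)

# The response body is a JSON stream; print it raw rather than assume fields.
payload = json.loads(response["body"].read())
print(payload)
```

A startup in the cohort would presumably wrap calls like this in an application layer, with SageMaker handling any custom model training.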

See the write-up to learn more about each of the Indian startups selected, or check out the full roster here.

The question is, “Will this help Amazon, which is struggling while Facebook, Google, and Microsoft look like the leaders in the AI derby?”

Cynthia Murrell, September 25, 2024

Open Source Dox Chaos: An Opportunity for AI

September 24, 2024

It is a problem as old as the concept of open source itself. ZDNet laments, “Linux and Open-Source Documentation Is a Mess: Here’s the Solution.” We won’t leave you in suspense. Writer Steven Vaughan-Nichols’ solution is the obvious one—pay people to write and organize good documentation. Less obvious is who will foot the bill. Generous donors? Governments? Corporations with their own agendas? That question is left unanswered.

But there is no doubt. Open-source documentation, when it exists at all, is almost universally bad. Vaughan-Nichols recounts:

“When I was a wet-behind-the-ears Unix user and programmer, the go-to response to any tech question was RTFM, which stands for ‘Read the F… Fine Manual.’ Unfortunately, this hasn’t changed for the Linux and open-source software generations. It’s high time we addressed this issue and brought about positive change. The manuals and almost all the documentation are often outdated, sometimes nearly impossible to read, and sometimes, they don’t even exist.”

Not only are the manuals that have been cobbled together outdated and hard to read, they are often so disorganized it is hard to find what one is looking for. Even when it is there. Somewhere. The post emphasizes:

“It doesn’t help any that kernel documentation consists of ‘thousands of individual documents’ written in isolation rather than a coherent body of documentation. While efforts have been made to organize documents into books for specific readers, the overall documentation still lacks a unified structure. Steve Rostedt, a Google software engineer and Linux kernel developer, would agree. At last year’s Linux Plumbers conference, he said, ‘when he runs into bugs, he can’t find documents describing how things work.’ If someone as senior as Rostedt has trouble, how much luck do you think a novice programmer will have trying to find an answer to a difficult question?”

This problem is no secret in the open-source community. Many feel so strongly about it they spend hours of unpaid time working to address it. Until they just cannot take it anymore. It is easy to get burned out when one is barely making a dent and no one appreciates the effort. At least, not enough to pay for it.

Here at Beyond Search we have a question: Why can’t Microsoft’s vaunted Copilot tackle this information problem? Maybe Copilot cannot do the job?

Cynthia Murrell, September 24, 2024
