AI and Job Wage Friction

April 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read again “The Jobs Being Replaced by AI – An Analysis of 5M Freelancing Jobs,” published in February 2024 by Bloomberg (the outfit interested in fiddled firmware on motherboards). The main idea in the report is that AI boosted a number of freelance jobs. What are the jobs where AI has not (as yet) added friction to the money-making process? Here’s the list of jobs NOT impeded by smart software:

Accounting

Backend development

Graphics design

Market research

Sales

Video editing and production

Web design

Web development

Other sources suggest that “Accounting” may be targeted by an AI-powered efficiency expert. I want to watch how this profession navigates smart software in what is often a repetitive series of eye-glazing steps.


Thanks, MSFT Copilot. How are you doing with your reorganization? Running smoothly? Yeah. Smoothly.

Now to the meat of the report: What professions or jobs were the MOST affected by AI? From the cited write up, these are:

Customer service (the exciting, long suffering discipline of chatbots)

Social media marketing

Translation

Writing

The write up includes another telling chunk of data. AI has apparently had an impact on the amount of money some customers were willing to pay freelancers or gig workers. The jobs finding greater billing friction are:

Backend development

Market research

Sales

Translation

Video editing and production

Web development

Writing

The article contains quite a bit of related information. Please consult the original for a number of almost unreadable graphics and tabular data. I do want to offer several observations:

  1. One consequence of AI, if the data in this report are close enough for horseshoes, is that smart software drives down what customers will pay for a wide range of human-centric services. You don’t lose your job; you just get a taste of Victorian sweatshop management thinking.
  2. Once smart software is perceived as reasonably capable of good enough translation, it is embraced, and demand and pay for human translators decline. My view is that translation services are likely to be a harbinger of how AI will affect other jobs. AI does not have to be great; it just has to be perceived as okay. Then. Bang. Hasta la vista, human translators, except for certain specialized functions.
  3. Data like the information in the Bloomberg article provide a handy road map for AI developers. The jobs least affected by AI become targets for entrepreneurs who find that low-hanging fruit like translation has been picked. (Accountants, I surmise, should not relax too much.)

Net net: The wage suppression angle and the pattern of incremental AI adoption followed by rapid embrace are important ideas to consider when analyzing the economic ripples of AI.

Stephen E Arnold, April 1, 2024

AI and Jobs: Underestimating Perhaps?

March 28, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I am interested in the impact of smart software on jobs. I spotted “1.5M UK Jobs Now at Risk from AI, Report Finds.” But the snappier assertion appears in the subtitle to the write up:

The number could rise to 7.9M in the future

The UK has about 68 million people (maybe more, maybe fewer, but close enough). The estimate of 7.9 million jobs at risk translates to roughly eight million people out of work. Now these types of “future impact” estimates are diaphanous. But the message seems clear. Despite the nascent stage of smart software’s development, the number one use may be dumping humans and learning to love software. Will the software make today’s systems work more efficiently? In my experience, computerizing processes does very little to improve the outputs. Some tasks are completed quickly. However, get the process wrong, and one has a darned interesting project for a blue-chip consulting firm.


The smart software is alone in an empty office building. Does the smart software look lonely or unhappy? Thanks, MSFT Copilot. Good enough illustration.

The write up notes:

Back-office, entry-level, and part-time jobs are the ones mostly exposed, with employees on medium and low wages being at the greatest risk.

If this statement is accurate, life will be exciting for parents whose progeny camp out in the family room or who turn to other, possibly less socially acceptable, methods of generating cash. Crime comes to my mind, but you may see volunteers working to pick up trash in lovely Plymouth or Blackpool.

The write up notes:

Experts have argued that AI can be a force for good in the labor market — as long as it goes hand in hand with rebuilding workforce skills.

Academics, wizards, elected officials, and consultants can find the silver lining in the cloud that spawned the tornado.

Several observations, if I may:

  1. The acceleration of tools to add AI to processes is evident in the continuous stream of “new” projects appearing in GitHub, Product Watch, and AI newsletters. The availability of tools means that applications will flow into job-reducing opportunities; that is, outfits which will pay cash to cut payroll.
  2. AI functions are now being embedded in mobile devices. Smart software will be a crutch and most users will not realize that their own skills are being transformed. Welcoming AI is an important first step in using AI to replace an expensive, unreliable humanoid.
  3. The floundering of government and non-governmental organizations is amusing to watch. Each day documents about managing the AI “risk” appear in my feedreader. Yet zero meaningful action is taking place as certain large companies work to consolidate their control of essential and mostly proprietary technologies and know-how.

Net net: The job loss estimate is interesting. My hunch is that it underestimates the impact of smart software on traditional work. This is good for smart software and possibly not so good for humanoids.

Stephen E Arnold, March 28, 2024

IBM and AI: A Spur to Other Ageing Companies?

March 27, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I love IBM. Well, I used to. Years ago I had three IBM PC 704 servers. Each was equipped with its expansion SCSI storage device. My love disappeared as we worked daily to keep the estimable ServeRAID software in tip top shape. For those unfamiliar with the thrill of ServeRAID, “tip top” means preventing the outstanding code from trashing data.


IBM is a winner. Thanks, MSFT Copilot. How are those server vulnerabilities today?

I was, therefore, not surprised to read “IBM Stock Nears an All-Time High—And It May Have Something to Do with its CEO Replacing As Many Workers with AI As Possible.” Instead of creating the first and best example of dinobaby substitution, Big Blue is now using smart software to reduce headcount. The write up says:

[IBM] used AI to reduce the number of employees working on relatively manual HR-related work to about 50 from 700 previously, which allowed them to focus on other things, he [Big Dog at IBM] wrote in an April commentary piece for Fortune. And in its January fourth quarter earnings, the company said it would cut costs in 2024 by $3 billion, up from $2 billion previously, in part by laying off thousands of workers—some of which it later chalked up to AI influence.

Is this development important? Yep. Here are the reasons:

  1. Despite its interesting track record in smart software, IBM has figured out it can add sizzle to the ageing giant by using smart software to reduce costs. Forget that cancer curing stuff. Go with straight humanoid replacement.
  2. The company has significant influence. Some Gen Y and Gen Z wizards don’t think about IBM. That’s fine, but banks, government agencies, Fortune 1000 firms, and family fund management firms do. What IBM does influences these bright entities’ thinking.
  3. The targeted workers are what one might call “expendable.” That’s a great way to motivate some of Big Blue’s war horses.

Net net: The future of AI is coming into focus for some outfits who may have a touch of arthritis.

Stephen E Arnold, March 27, 2024

Commercial Open Source: Fantastic Pipe Dream or Revenue Pipe Line?

March 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Open source is a term which strikes me as au courant. Artificial intelligence software is often described as “open source.” The idea has a bit of “do good” mixed with the idea that commercial software puts customers in handcuffs. (I think I hear Kumbaya playing faintly in the background.) Is it possible to blend the idea of free and open software with the principles of commercial software lock-in? Notable open source entrepreneurs have become difficult to differentiate from run-of-the-mill technology companies. Examples include RedHat, Elastic, and OpenAI. Ooops. Sorry. OpenAI is a different type of company. I think.


Will open source software, particularly open source AI components, end up like this private playground? Thanks, MSFT Copilot. You are into open source, aren’t you? I hope your commitment is stronger than for server and cloud security.

I had these open source thoughts when I read “AI and Data Infrastructure Drives Demand for Open Source Startups.” The source of the information is Runa Capital, now located in Luxembourg. The firm publishes a report called the Runa Open Source Start Up Index, and it is a “rosy” document. The point of the article is that Runa sees open source as a financial opportunity. You can start your exploration of the tables and charts at this link on the Runa Capital Web site.

I want to focus on some information tucked into the article, just not presented in bold face or with a snappy chart. Here’s the passage I noted:

Defining what constitutes “open source” has its own inherent challenges too, as there is a spectrum of how “open source” a startup is — some are more akin to “open core,” where most of their major features are locked behind a premium paywall, and some have licenses which are more restrictive than others. So for this, the curators at Runa decided that the startup must simply have a product that is “reasonably connected to its open-source repositories,” which obviously involves a degree of subjectivity when deciding which ones make the cut.

The word “reasonably” invokes an image of lawyers negotiating on behalf of their clients. Nothing is quite so far from the kumbaya of the “real” open source software initiative as lawyers. Just look at the licenses for open source software.

I also noted this statement:

Thus, according to Runa’s methodology, it uses what it calls the “commercial perception of open-source” for its report, rather than the actual license the company attaches to its project.

What is “open source”? My hunch it is whatever the lawyers and courts conclude.

Why is this important?

The talk about “open source” is relevant to the “next big thing” in technology. And what is that? ANSWER: A fresh set of money making plays.

I know that there are true believers in open source. I wish them financial and kumbaya-type success.

My take is different: Open source, as the term is used today, is one of the phrases repurposed to breathe life into what some critics call a techno-feudal world. I don’t have a dog in the race. I don’t want a dog in any race. I am a dinobaby. I find amusement in how language becomes the Teflon on which money (one hopes) glides effortlessly.

And the kumbaya? Hmm.

Stephen E Arnold, March 26, 2024

AI Job Lawnmowers: Will Your Blooms Be Chopped Off and Put a Rat King in Your Future?

March 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I love “you will lose your job to AI” articles. I spotted an interesting one titled “The Job Sectors That Will Be Most Disrupted By AI, Ranked.” This is not so much an article as a billboard for an outfit named Voronoi, “where data tells the story.” That’s interesting because there are no data, no methodology, and no indication of the confidence level for each “nuked job.” Nevertheless, we have a ranking.


Thanks, MSFT Copilot. Will you be sparking human rat kings? I would wager that you will.

As I understand the analysis of 19,000 tasks, here are the jobs most likely to be chopped down and converted to AI silage:

IT / programmers: 73 percent of the job will experience a large impact

Finance / bean counters: 70 percent of the job will experience a large impact

Customer sales: 67 percent of the job will experience a large impact

Operations (well, that’s a fuzzy category, isn’t it?): 65 percent of the job will experience a large impact

Personnel / HR: 57 percent of the job will experience a large impact

Marketing: 56 percent of the job will experience a large impact

Legal eagles: 46 percent of the job will experience a large impact

Supply chain (another fuzzy wuzzy bucket): 43 percent of the job will experience a large impact

The kicker in the data is that the numbers date from September 2023. Six months in the faerie land of smart software is a long, long time. Let’s assume that the data meet 2024’s gold standard.

Technology, finance, sales, marketing, and lawyering may shatter the future of employees of less value in terms of compensation, cost to the organization, or whatever management legerdemain the top dogs and their consultants whip up. Imagine: eliminating the overhead for humans like office space, health care, retirement baloney, and vacations makes smart software into an attractive “play.”

And what about the fuzzy buckets? My thought is that many people will be trimmed because a chatbot can close a sale for a product without the hassle which humans drag into the office; for example, sexual harassment, mental health, drug, and alcohol “issues,” and the unfortunate workplace shooting. I think that a person sitting in a field office to troubleshoot issues related to a state or county contract might fall into the “operations” category even though the employee sees the job as something smart software cannot perform. Ho ho ho.

Several observations:

  • A trivial cost analysis of human versus software over a five-year period means humans lose (a back-of-the-envelope sketch appears after this list)
  • AI systems, which may suck initially, will be improved over time. These initial failures may lull the once-alert employee into a false sense of security about being replaced
  • Once displaced, former employees will have to scramble to produce cash. With lots of individuals chasing available work and money plays, life is unlikely to revert to the good old days of the Organization Man. (The world will be Organization AI. No suit and white shirt required.)
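
To make the first bullet concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (salary, benefits load, license and compute costs) is a hypothetical placeholder I invented, not a number from the cited write up; the point is only that even crude arithmetic over five years tends to favor the software.

    # Back-of-the-envelope comparison of one employee versus an AI service over
    # five years. Every number is a made-up placeholder; swap in real figures.
    YEARS = 5

    # Hypothetical fully loaded cost of one human per year
    salary = 60_000
    benefits_and_overhead = 0.40 * salary      # health care, office space, vacations
    human_per_year = salary + benefits_and_overhead

    # Hypothetical cost of the software replacement per year
    license_and_compute = 12_000
    integration_and_oversight = 8_000          # someone still has to watch the software
    ai_per_year = license_and_compute + integration_and_oversight

    human_total = human_per_year * YEARS
    ai_total = ai_per_year * YEARS

    print(f"Human, {YEARS} years:    ${human_total:,.0f}")
    print(f"Software, {YEARS} years: ${ai_total:,.0f}")
    print(f"Difference:              ${human_total - ai_total:,.0f}")

Even if the software-side guesses are doubled, the gap is the sort of number that catches a bean counter’s eye.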

Net net: I am glad I am old and not quite as enthralled by efficiency.

Stephen E Arnold, March 25, 2024

AI Innovation: Do Just Big Dogs Get the Fat, Farmed Salmon?

March 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Let’s talk about statements like “AI will be open source” and “AI has spawned hundreds, if not thousands, of companies.” Those are assertions which seem to be slightly different from what’s unfolding at some of the largest technology outfits in the world. The circling and sniffing allegedly underway between the Apple pack and the Google pack is interesting. Apple and Google have a relationship, probably one that will need a marriage counselor, but it is a relationship.


The wizard scientists have created an interesting digital construct. Thanks, MSFT Copilot. How are you coming along with your Windows 11 updates and Azure security today? Oh, that’s too bad.

The news, however, is that Microsoft is demonstrating that it wants to eat the fattest salmon in the AI stream. Microsoft has a deal of some type with OpenAI, operating under the steady hand of Sam AI-Man. Plus the Softies have cozied up to the French outfit Mistral. Today at 5:30 am US Eastern I learned that Microsoft has embraced an outstanding thinker, sensitive manager, and pretty much the entire Inflection AI outfit.

The number of stories about this move reflects the interest in smart software and in what may be one of the world’s top purveyors of software which attracts bad actors from around the world. Thinking about breaches in the new Microsoft world is not a topic in the write ups about this deal. Why? I think the management move has captured attention because it is surprising, disruptive, and big in terms of money and implications.

“Microsoft Hires DeepMind Co-Founder Suleyman to Run Consumer AI” states:

DeepMind workers complained about his [former Googler Mustafa Suleyman and subsequent Inflection.ai senior manager] management style, the Financial Times reported. Addressing the complaints at the time, Suleyman said: “I really screwed up. I was very demanding and pretty relentless.” He added that he set “pretty unreasonable expectations” that led to “a very rough environment for some people. I remain very sorry about the impact that caused people and the hurt that people felt there.” Suleyman was placed on leave in 2019 and months later moved to Google, where he led AI product management until exiting in 2022.

Okay, a sensitive manager who learns from his mistakes joins Microsoft.

And Microsoft demonstrates that the AI opportunity is wide open. “Why Microsoft’s Surprise Deal with $4 Billion Startup Inflection Is the Most Important Non-Acquisition in AI” states:

Even since OpenAI launched ChatGPT in November 2022, the tech world has been experiencing a collective mania for AI chatbots, pouring billions of dollars into all manner of bots with friendly names (there’s Claude, Rufus, Poe, and Grok — there’s event a chatbot name generator). In January, OpenAI launched a GPT store that’s chock full of bots. But how much differentiation and value can these bots really provide? The general concept of chatbots and copilots is probably not going away, but the demise of Pi may signal that reality is crashing into the exuberant enthusiasm that gave birth to a countless chatbots.

Several questions will be answered in the weeks ahead:

  1. What will regulators in the EU and US do about the deal when its moving parts become known?
  2. How will the kumbaya evolve when Microsoft senior managers, its AI partners, and reassigned Microsoft employees have their first all-hands Teams or off-site meeting?
  3. Does Microsoft senior management have the capability of addressing the attack surface of the new technologies and the existing Microsoft software?
  4. What happens to the AI ecosystem which depends on open source software related to AI if Microsoft shifts into “commercial proprietary” to hit revenue targets?
  5. With multiple AI systems, how are Microsoft Certified Professional agents going to [a] figure out what broke and [b] fix it?
  6. With AI the apparent “next big thing,” how will adversaries like nations not pals with the US respond?

Net net: How unstable is the AI ecosystem? Let’s ask IBM Watson because its output is going to be as useful as any other in my opinion. My hunch is that the big dogs will eat the fat, farmed salmon. Who will pull that luscious fish from the big dog’s maw? Not me.

Stephen E Arnold, March 20, 2024

Old Code, New Code: Can You Make It Work Again… Sort Of?

March 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Even hippy-dippy, super-slick AI start ups have a technical debt problem. It is, in my opinion, no different from the “costs” imposed on outfits like JPMorgan Chase or (heaven help us) AMTRAK. Software which mostly works is subject to two environmental problems. First, the people who wrote the code or made it work the last time catastrophe struck (hello, AT&T, how are those pushed updates working for you now?) move on, quit, or whatever. Second, the technical options for remediating the problem are evolving (how are those security hot fixes working out, Microsoft?).


The helpful father asks a question the aspiring engineer cannot answer. Thus it was when the wizard was a child, and it is when the wizard is working on a modern engineering project. Buildings tip; aircraft lose doors and wheels. Software updates kill computers. Self-driving cars cannot drive themselves. Thanks, MSFT Copilot. Did you get your model airplane to fly when you were a wee lad? I think I know the answer.

I thought about this problem of the cost of remediating, fixing, redoing, or upgrading code, or whatever term fast-talking sales engineers use in their Zooms and PowerPoints, as I read “The High-Risk Refactoring.” The write up does a good job of explaining in a gentle way what happens when suits authorize making old code like new again. (The suits do not know the agonies of the original developers, but why should “history” intrude on a whiz bang GenX or GenY management type?)

The article says:

it’s highly important to ensure the system works the same way after the swap with the new code. In that regard, immediately spotting when something breaks throughout the whole refactoring process is very helpful. No one wants to find that out in production.

No kidding.

In most cases, there are insufficient skilled people and money to create a new or revamped system, get it up and running in parallel for an appropriate period of time, identify the problems, remediate them, and then make the cutover. People buy cars this way, but that’s not how most organizations, regardless of size, “do” software. Okay, the take-the-car-in, buy-a-new-one, drive-off approach will not work in today’s business environment.

The write up focuses on what most organizations do; that is, write or fix new code and stick it into a system. There may or may not be resources for a staging server, but the result is the same. The old software has been “fixed,” the documentation is “sort of written,” and people move on to other work or, in the case of consulting engineering firms, get replaced by a new, higher-margin professional.

The write up takes a different approach and concludes with four suggestions or questions to ask. I quote:

“Refactor if things are getting too complicated, but stop if can’t prove it works.

Accompany new features with refactoring for areas you foresee to be subject to a change, but copy-pasting is ok until patterns arise.

Be proactive in finding new ways to ensure refactoring predictability, but be conservative about the assumption QA will find all the bugs.

Move business logic out of busy components, but be brave enough to keep the legacy code intact if the only argument is “this code looks wrong”.

These are useful points. I would like to suggest some bright white lines for those who have to tackle an IRS-mainframe- or AT&T-billing system type of challenge as well as tweaking an artificial intelligence solution to respond to those wonky multi-ethnic images Google generated in order to allow the Sundar & Prabhakar Comedy Team to smile sheepishly and apologize again for lousy software.

Are you ready? Let’s go:

  1. Fixes add to the complexity of the code base. As time goes stumbling forward, the complexity of the software becomes greater. The cost of making sure the fix works and does not create exciting dependency behavior goes up. Thus, small fixes “cost” more, and these costs are tough to control.
  2. The safest fixes are “wrappers”; that is, no one in his or her right mind wants to change software written in 1978 for a machine no longer in production by the manufacturer. Therefore, new software is written to interact in a “safe” way with the original software. The new code “fixes up” the problem without screwing up what grandpa programmer wrote almost half a century ago. The problem is that “wrappers” tend to slow stuff down. The fix is to say one will optimize the system while one looks for a new project or job. (A minimal wrapper sketch appears after this list.)
  3. The software used for “fixing” a problem is becoming the equivalent of repairing an aircraft component with Dawn dish detergent. The “fix” is cheap, easy to use, and good enough. The software equivalent of this Dawn solution is that it will not stand the test of time. Instead of code crafted in good old COBOL or Assembler, we have some Fancy Dan tools which may fall out of favor in a matter of months, not decades.
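
To show what item 2 looks like in practice, here is a minimal sketch in Python. The legacy routine, its fixed-width inputs, and the wrapper class are all hypothetical inventions for illustration; the pattern is simply new code that translates between a modern interface and the untouched old one, at the cost of an extra layer on every call.

    from datetime import date

    def legacy_billing_total(account_code: str, yymmdd: str) -> int:
        """Stand-in for the 1978-era routine: fixed-width fields, two-digit
        years, and a total returned as an integer number of cents."""
        if len(account_code) != 8 or len(yymmdd) != 6:
            raise ValueError("legacy routine expects fixed-width fields")
        return 104_217  # pretend this came out of the old calculation

    class BillingWrapper:
        """New code that 'fixes up' the interface without touching the original."""

        def total_for(self, account_id: int, as_of: date) -> float:
            # Translate modern types into the fixed-width strings the old code wants.
            account_code = f"{account_id:08d}"
            yymmdd = as_of.strftime("%y%m%d")
            cents = legacy_billing_total(account_code, yymmdd)
            # Translate the result back into what the new system expects.
            return cents / 100.0

    wrapper = BillingWrapper()
    print(wrapper.total_for(1234, date(2024, 3, 18)))   # 1042.17

The original routine never changes; every call just pays a small translation tax, which is exactly the slowdown the second point warns about.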

Many projects promise better, faster, and cheaper. The reminder “Pick two” is helpful.

Net net: Fixing up lousy or flawed software is going to increase risks and costs. The question asked by bean counters is, “How much?” The answer is, “No one knows until the project is done … if ever.”

Stephen E Arnold, March 18, 2024

Thomson Reuters Is Going to Do AI: Run Faster

March 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Thomson Reuters, a mostly low profile outfit, is going to do AI. Why’s this interesting to law schools, lawyers, accountants, special librarians, libraries, and others who “pay” for “real” information? There are three reasons:

  1. Money
  2. Markets
  3. Mania.

Thomson Reuters has been a tech talker for decades. The company created skunk works. It hired quirky MIT wizards. It bought businesses with information technology. But underneath the professional publishing clear coat, the firm is the creation of Lord Thomson of Fleet. The firm has a track record of being able to turn a profit on its $7 billion in revenues. But the future, if news reports are accurate, is artificial intelligence or smart software.


The young publishing executive says, “I have got to get ahead of this AI bus before it runs over me.” Thanks, MSFT Copilot. Working on security today?

But wait! What makes Thomson Reuters different from the New York Times or (heaven forbid the question) Rupert Murdoch’s confections? The answer, in my opinion, is that Thomson Reuters does the trust thing and is a professional publisher. I don’t want to explain that in the world of Lord Thomson of Fleet publishing is publishing. Nope. Not going there. Thomson Reuters is a custom-made billiard cue, not one of those bar pool cheapos.

As appropriate to today’s Thomson Reuters, the news appeared in Thomson’s own news releases first; for example, “Thomson Reuters Profit Beats Estimates Amid AI Push.” Yep, AI drives profits. That’s the “m” in money. Plus, late last year this article found its way to the law firm market (yep, that’s the second “m”): “Morgan Lewis and Thomson Reuters Enter into Partnership to Put Law Firms’ Needs at the Heart of AI Development.”

Now the third “m” or mania. Here’s a representative story, “Thomson Reuters to Invest US$8 billion in a Substantial AI-Focused Spending Initiative.” You can also check out the Financial Times’s report at this link.

Thomson Reuters is a $7 billion corporation. If the $8 billion number is on the money, the venerable news outfit is going to spend the equivalent of more than one year’s revenue acquiring and investing in smart software. In terms of professional publishing, this chunk of change is roughly the equivalent of Sam AI-Man’s need for trillions of dollars for his smart software business.

Several thoughts struck me as I was reading about the $8 billion investment in smart software:

  1. In terms of publishing or more narrowly professional publishing, $8 billion will take some time to spend. But time is not on the side of publishing decision making processes. When the check is written for an AI investment, there may be some who ask, “Is this the correct investment? After all, aren’t we professional publishers serving lawyers, accountants, and researchers?”
  2. The US legal processes are interesting. But the minor challenge of Crown copyright adds a bit of spice to certain investments. The UK government itself is reluctant to push into some AI areas due to concerns that certain information may not be available unless the red tape about copyright has been trimmed, rolled, and put on the shelf. Without being disrespectful, Thomson Reuters could find that some of the $8 billion heads into its clients’ pockets as legal challenges make their way through courts in Britain, Canada, and the US, and probably some frisky EU states.
  3. The game for AI seems to be splitting in two, along the lines of what a former Greek minister calls the techno-feudal set up. On one hand, there are giant technology-centric companies (of which Thomson Reuters is not one of the club members). These are Google- and Microsoft-scale outfits with infrastructure, data, customers, and multiple business models. On the other hand, there are the Product Watch outfits which are using open source and APIs to create “new” and “important” AI businesses, applications, and solutions. In short, there are some barons and a whole grab-bag of lesser folk. Is Thomson Reuters going to be able to run with the barons? Remember, please, the barons are riding stallions. Thomson Reuters-type firms either walk or ride donkeys.

Net net: If Thomson Reuters spends $8 billion on smart software, how many lawyers, accountants, and researchers will be put out of work? The risks are not just bad AI investments. The threat may be to gut the billing power of the paying customers for Thomson Reuters’ content. This will be entertaining to watch.

PS. The third “m”? It is mania, AI mania.

Stephen E Arnold, March 11, 2024


The Internet as a Library and Archive? Ho Ho Ho

March 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I know that I find certain Internet-related items a knee slapper. Here’s an example: “Millions of Research Papers at Risk of Disappearing from the Internet.” A surprising number of individuals, young at heart and allegedly-informed seniors alike, think the “Internet” is a library or, better yet, an archive like the Library of Congress’ collection of “every” book.


A person deleting data with some degree of fierceness. Yep, thanks MSFT Copilot. After three tries, this is the best of the lot for a prompt asking for an illustration of data being deleted from a personal computer. Not even good enough but I like the weird orange coloration.

Here are some basics of how “Internet” services work:

  1. Every year, the cost of storing old and rarely or never accessed data goes up. A bean counter calls a meeting and asks, “Do we need to keep paying for ping, power, and pipes?” Someone points out that usage of the data described as “old” is 0.0003 percent or whatever number the bright young sprout has guess-timated. The decision is, as you might guess, to dump the old files and reduce costs immediately. (A toy version of this arithmetic appears after the list.)
  2. Doing “data” or “online” is expensive, and the costs associated with each are very difficult, if not impossible, to control. Government agencies, non-governmental outfits, the United Nations, a library in Cleveland, and the estimable Harvard University do not have sufficient money to make information available or keep it at hand. Thus, stuff disappears.
  3. Well-intentioned outfits like the Internet Archive or Project Gutenberg are in the same accountant’s ink pot. Not every Web site is indexed and archived comprehensively. Not every book that can be digitized is converted to a format someone thinks will last “forever.” As a result, one has a better chance of discovering new information browsing through donated manuscripts at the Vatican Library than running an online query.
  4. If something unique is online “somewhere,” that item may be unfindable. Hey, what about Duke University’s collection of “old” books from the 17th century? Who knew?
  5. Will a government agency archive digital content in a comprehensive manner? Nope.
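
The bean counter’s meeting math in item 1 is easy to reproduce. Below is a toy Python calculation; the storage price, data volume, and access counts are hypothetical numbers I invented, not anyone’s real pricing, but they show why “roll off the old data” wins the meeting.

    # Toy version of the bean counter's meeting math. All numbers are hypothetical.
    old_data_tb = 500               # terabytes of rarely touched "old" data
    cost_per_tb_per_month = 20.0    # made-up price covering ping, power, and pipes
    accesses_per_month = 15         # how often anyone asks for the old stuff
    total_objects = 5_000_000       # files sitting in the archive

    annual_cost = old_data_tb * cost_per_tb_per_month * 12
    usage_rate = accesses_per_month / total_objects * 100

    print(f"Annual cost to keep the old data: ${annual_cost:,.0f}")
    print(f"Share of objects touched per month: {usage_rate:.4f}%")
    # A six-figure bill against usage measured in thousandths of a percent:
    # the decision to delete writes itself.

With numbers like these, the 0.0003 percent figure and the “dump it” decision in item 1 are not hard to understand.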

The article about “risks of disappearing” is a hoot. Notice this passage:

“Our entire epistemology of science and research relies on the chain of footnotes,” explains author Martin Eve, a researcher in literature, technology and publishing at Birkbeck, University of London. “If you can’t verify what someone else has said at some other point, you’re just trusting to blind faith for artefacts that you can no longer read yourself.”

I like that word “epistemology.” Just one small problem: Trust. Didn’t the president of Stanford University have an opportunity to find his future elsewhere due to some data wonkery? Google wants to earn trust. Other outfits don’t fool around with trust; these folks gather data, exploit it, and resell it. Archiving and making it findable to a researcher or law enforcement? Not without friction, lots and lots of friction. Why verify? Estimates of non-reproducible research range from 15 percent to 40 percent of scientific, technical, and medical peer reviewed content. Trust? Hello, it’s time to wake up.

Many estimate how much new data are generated each year. I would suggest that data falling off the back end of online systems has been an active process. The first time an accountant hears the IT people say, “We can just roll off the old data and hold storage stable” is right up there with avoiding an IRS audit, finding a life partner, and billing an old person for much more than the accounting work is worth.

After 25 years, there is “risk.” Wow.

Stephen E Arnold, March 8, 2024

ACM: Good Defense or a Business Play?

March 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Professional publishers want to use the trappings of peer review, standards, tradition, and quasi-academic hoo-hah to add value to their products; others want a quasi-monopoly. Think public legal filings and stuff in a high school chemistry book. The customers of professional publishers are typically not the folks at the pizza joint on River Road in Prospect, Kentucky. The business of professional publishing is an interesting one, but in the wild and crazy world of collapsing next-gen publishing, professional publishing is often ignored. A publisher conference aimed at professional publishers is quite different from the Jazz Age South by Southwest shindig.


Yep, free. Thanks, MSFT Copilot. How’s that security today?

But professional publishers have been in the news. Examples include the dust up about academics making up data. The big-time president of the much-honored Stanford University took intellectual shortcuts and quit late last year. Then there was the nasty issue about data and bias at the esteemed Harvard University. Plus, a number of bookish types have guess-timated that a hefty percentage of research studies contain made-up data. Hey, you gotta publish to get tenure or get a grant, right?

But there is an intruder in the basement of the professional publishing club. The intruder positions itself in the space between the making up of some data and the professional publishing process. That intruder is ArXiv, an open-access repository of electronic preprints and postprints (known as e-prints) approved for posting after moderation, according to Wikipedia. (Wikipedia is the cancer which killed the old-school encyclopedias.) Plus, there are services which offer access to professional content without paying for the right to host the information. I won’t name these services because I have no desire to have legal eagles circle about my semi-functioning head.

Why do I present this grade-school level history? I read “CACM Is Now Open Access.” Let’s let the Association for Computing Machinery explain its action:

For almost 65 years, the contents of CACM have been exclusively accessible to ACM members and individuals affiliated with institutions that subscribe to either CACM or the ACM Digital Library. In 2020, ACM announced its intention to transition to a fully Open Access publisher within a roughly five-year timeframe (January 2026) under a financially sustainable model. The transition is going well: By the end of 2023, approximately 40% of the ~26,000 articles ACM publishes annually were being published Open Access utilizing the ACM Open model. As ACM has progressed toward this goal, it has increasingly opened large parts of the ACM Digital Library, including more than 100,000 articles published between 1951–2000. It is ACM’s plan to open its entire archive of over 600,000 articles when the transition to full Open Access is complete.

The decision was not an easy one. Money issues rarely are.

I want to step back and look at this interesting change from a different point of view:

  1. Getting a degree today is less of a must have than when I was a wee dinobaby. My parents told me I was going to college. Period. I learned how much effort was required to get my hands on academic journals. I was a master of knowing that Carnegie-Mellon had new but limited bound volumes of certain professional publications. I knew what journals were at the University of Pittsburgh. I used these resources when the Duquesne Library was overrun with the faithful. Now “researchers” can zip online and whip up astonishing results. Google-type researchers prefer the phrase “quantumly supreme results.” This social change is one factor influencing the ACM.
  2. Stabilizing revenue streams means pulling off a magic trick. Sexy conferences and special events complement professional association membership fees. Reducing costs means knocking off the now, very very expensive printing, storing, and shipping of physical journals. The ACM seems to have figured out how to keep the lights on and the computing machine types spending.
  3. ACM members can use ACM content the way they use a pirate library or the feel-good ArXiv outfit. The move helps neutralize discontent among the membership, and it is good PR.

These points raise a question; to wit: In today’s world, how relevant will a professional association and its professional publications be going forward? The ACM states:

By opening CACM to the world, ACM hopes to increase engagement with the broader computer science community and encourage non-members to discover its rich resources and the benefits of joining the largest professional computer science organization. This move will also benefit CACM authors by expanding their readership to a larger and more diverse audience. Of course, the community’s continued support of ACM through membership and the ACM Open model is essential to keeping ACM and CACM strong, so it is critical that current members continue their membership and authors encourage their institutions to join the ACM Open model to keep this effort sustainable.

Yep, surviving in a world of faux expertise.

Stephen E Arnold, March 8, 2024
