Dreaming about Enterprise Search: Hope Springs Eternal…
November 6, 2024
The post is the work of a humanoid who happens to be a dinobaby. GenX, Y, and Z, read at your own risk. If art is included, smart software produces these banal images.
Enterprise search is back, baby. The marketing lingo is very year 2003, however. The jargon has been updated, but the story is the same: We can make an organization’s information accessible. Instead of Autonomy’s Neurolinguistic Programming, we have AI. Instead of “just text,” we have video content processed. Instead of filters, we have access to cloud-stored data.
An executive knows he can crack the problem of finding information instantly. The challenge is doing it without the data clean-up costing more than buying the Empire State Building. Thanks, Stable Diffusion. Good enough.
A good example of the current approach to selling the utility of an enterprise search and retrieval system is the article / interview in Betanews called “How AI Is Set to Democratize Information.” I want to be upfront. I am mostly aligned with the analysis of information and knowledge presented by Taichi Sakaiya. His The Knowledge Value Revolution or a History of the Future has been a useful work for me since the early 1990s. I was in Osaka, Japan, lecturing at the Kansai Institute of Technology when I learned of this book from my gracious hosts and the Managing Director of Kinokuniya (my sponsor). Devaluing knowledge by regressing to the fat part of a Gaussian distribution is not something about which I am excited.
However, the senior manager of Pyron (Raleigh, North Carolina), an AI-powered information retrieval company, finds the concept in line with what his firm’s technology provides to its customers. The article includes this statement:
The concept of AI as a ‘knowledge cloud’ is directly tied to information access and organizational intelligence. It’s essentially an interconnected network of systems of records forming a centralized repository of insights and lessons learned, accessible to individuals and organizations.
The benefit is, according to the Pyron executive:
By breaking down barriers to knowledge, the AI knowledge cloud could eliminate the need for specialized expertise to interpret complex information, providing instant access to a wide range of topics and fields.
The article introduces a fresh spin on the problems of information in organizations:
Knowledge friction is a pervasive issue in modern enterprises, stemming from the lack of an accessible and unified source of information. Historically, organizations have never had a singular repository for all their knowledge and data, akin to libraries in academic or civic communities. Instead, enterprise knowledge is scattered across numerous platforms and systems — each managed by different vendors, operating in silos.
Pyron opened its doors in 2017. After seven years, the company is presenting a vision of what access to enterprise information could, would, and probably should do.
The reality, based on my experience, is different. I am not talking about Pyron now. I am discussing the re-emergence of enterprise search as the killer application for bolting artificial intelligence to information retrieval. If you are in love with AI systems from oligopolists, you may want to stop scanning this blog post. I do not want to be responsible for a stroke or an esophageal spasm. Here we go:
- Silos of information are an emergent phenomenon. Knowledge has value. Few want to make their information available without some value returning to them. Therefore, one can talk about breaking silos and democratization, but those silos will be erected and protected: secret skunk works, mislabeled projects, and knowledge nuggets squirreled away for a winter’s day. In the case of Senator Everett Dirksen, the information was used to get certain items prioritized. That’s why there is a building named after him.
- The “value” of information or knowledge depends on another person’s need. A database which contains the antidote to save a child from a household poisoning costs money to access. Why? Desperate people will pay. The “information wants to be free” idea is not one that makes sense to those with information and the knowledge to derive value from what another finds inscrutable. I am not sure that “democratizing information” meshes smoothly with my view.
- Enterprise search, with or without AI, hits the cost and time problems that have dogged the field for more than 50 years. SMART failed, STAIRS III failed, and the hundreds of followers have failed. Content is messy. The idea that one can process text, spreadsheets, Word files, and email is one thing. Doing it without skipping wonky files, or without the time and cost of repurposing data, remains difficult. Chemical companies deal with formulae; nuclear engineering firms deal with records management and mathematics; and consulting companies deal with highly paid people who lock up their information on a personal laptop. Without these little puddles of information, the “answer” or the “search output” will not be just a hallucination. The answer may be dead wrong.
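The wonky-file problem above can be made concrete with a small sketch. This is illustrative only, not any vendor’s actual pipeline; the function name and file handling are my assumptions. A naive ingestion loop quietly skips files it cannot decode, so the index, and any AI answer built on top of it, has blind spots:

```python
# Illustrative sketch only: a naive ingestion loop of the kind many
# enterprise search pipelines resemble. Names here are hypothetical.
from pathlib import Path

def ingest(paths):
    """Try to extract text from each file; silently skip anything wonky."""
    indexed, skipped = [], []
    for p in paths:
        try:
            # Fails on binary files, odd encodings, locked or damaged files
            text = Path(p).read_text(encoding="utf-8")
            indexed.append((p, text))
        except (UnicodeDecodeError, OSError):
            # Dropped without fanfare: the index now has a blind spot
            skipped.append(p)
    return indexed, skipped
```

In practice the skipped list is exactly where the chemical formulae, engineering records, and locked-up laptop documents tend to end up, which is why the “search output” can be dead wrong rather than merely incomplete.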
I understand the need to whip up jargon like “democratize information,” “knowledge friction,” and “RAG frameworks.” The problem is that, despite the words, delivering accurate, verifiable, timely, on-point search results in response to a query is a difficult problem.
Maybe one of the monopolies will crack the problem. But most of the output is a glimpse of what may be coming in the future. When will the future arrive? Probably when the next PR or marketing write up about search appears. As I have said numerous times, I find it more difficult to locate the information I need than at any time in my more than half a century in online information retrieval.
What’s easy is recycling marketing literature from companies who were far better at describing a “to be” system, not a “here and now” system.
Stephen E Arnold, November 4, 2024
Twenty Five Percent of How Much, Google?
November 6, 2024
The post is the work of a humanoid who happens to be a dinobaby. GenX, Y, and Z, read at your own risk. If art is included, smart software produces these banal images.
I read the encomia to Google’s quarterly report. In a nutshell, everything is coming up roses, even the hyperbole. One news hook which has snagged some “real” news professionals is that “more than a quarter of new code at Google is generated by AI.” The exclamation point is implicit. Google’s AI PR is different from some other firms’; for example, Samsung blames its financial performance disappointments on some AI. Winners and losers in a game in which some think the oligopolies are automatic winners.
An AI believer sees the future which is arriving “soon, real soon.” Thanks, You.com. Good enough because I don’t have the energy to work around your guard rails.
The question is, “How much code and technical debt does Google have after a quarter century of its court-described monopolistic behavior?” Oh, that number is unknown. How many current Google engineers fool around with that legacy code? Oh, that number is unknown, and probably for very good reasons. The old crowd of wizards has been hit with retirement, cashing in and cashing out, and “leadership” nervous about fiddling with some processes that are “good enough.” But 25 years. No worries.
The big news is that 25 percent of “new” code is written by smart software and then checked by the current and wizardly professionals. How much “new” code is written each year for the last three years? What percentage of the total Google code base is “new” in the years between 2021 and 2024? My hunch is that “new” is relative. I also surmise that smart software doing 25 percent of the work is one of those PR and Wall Street targeted assertions specifically designed to make the Google stock go up. And it worked.
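To see why “new” is relative, here is a back-of-the-envelope calculation. Every number below is hypothetical; Google disclosed neither its total code base size nor its annual volume of new code. Even granting the 25 percent figure, the AI-written share of the total code base can be tiny:

```python
# Back-of-the-envelope sketch with made-up numbers. Google has not
# published its total code base size or annual "new" code volume.
total_loc = 2_000_000_000       # hypothetical total lines in the code base
new_loc_per_year = 40_000_000   # hypothetical "new" lines written this year
ai_share_of_new = 0.25          # the figure in the quarterly report PR

ai_loc = new_loc_per_year * ai_share_of_new
share_of_total = ai_loc / total_loc
print(f"AI-generated lines this year: {ai_loc:,.0f}")
print(f"Share of total code base: {share_of_total:.2%}")  # prints 0.50%
```

With these made-up inputs, smart software wrote 10 million lines, about half of one percent of the total. The percentage that matters depends entirely on the denominator Google did not supply.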
However, I noted this Washington Post article: “Meet the Super Users Who Tap AI to Get Ahead at Work.” Buried in that write up, which ran its mostly rah-rah AI “real” news coincident with Google’s AI-spinning quarterly report, is one interesting comment:
Adoption of AI at work is still relatively nascent. About 67 percent of workers say they never use AI for their jobs compared to 4 percent who say they use it daily, according to a recent survey by Gallup.
One can interpret this as saying, “Imagine the growth that is coming from reduced costs. Get rid of most coders and just use Google’s and other firms’ smart programming tools.”
Another interpretation is, “The actual use is much less robust than the AI hyperbole machine suggests.”
Which is it?
Several observations:
- Many people want AI to pump some life into the economic fuel tank. By golly, AI is going to be the next big thing. I agree, but I think the Gallup data indicates that the go go view is like looking at a field of corn from a crop duster zipping along at 1,000 feet. The perspective from the airplane is different from the person walking amidst the stalks.
- Google-type assertions about how much machine-generated code is in the mix sound good, but where are the data? Google, aren’t you data driven? So where’s the backup data for the 25 percent assertion?
- Smart software seems to be something that is expensive and requires dreams of small nuclear reactors next to a data center adjacent to a hospital. Yeah, maybe once the impact statements, the nuclear waste, and the skilled worker issues have been addressed. Soon, as measured in environmental impact statement time, which is different from quarterly report time.
Net net: Google desperately wants to be the winner in smart software. The company is suggesting that if it were broken apart by crazed government officials, smart software would die. Insert the exclamation mark. Maybe two or three. That’s unlikely. The blurring of “as is” with “to be” is interesting and misleading.
Stephen E Arnold, November 6, 2024
Is Telegram Inspiring Microsoft?
November 6, 2024
You’d think the tech industry would be creative and original, but it’s exactly like the others: everyone copies everyone else. The Verge runs down how Microsoft is “inspired” by Telegram in “Microsoft Teams Is Getting Threads And Combined Chats And Channels.” Microsoft plans to bring threads to Teams and to combine separate chats and channels in the communications app. These changes won’t happen until 2025. They are similar to how Telegram already operates.
Microsoft is updating its UI, because of negative feedback from users. The changes will make Microsoft Teams easier to use and more organized:
“This new UI fixes one of the big reasons Microsoft Teams sucks for messaging, so you no longer have to flick between separate sections to catch up on messages from groups of people or channels. You’ll be able to configure this new section to keep chats and channels separate or enable custom sections where you can group conversations and projects together.”
Teams will include more updates: a favorites section to pin chats and channels; view customizations such as previews, single lists, and time stamps; and highlighting of conversations that mention users. Microsoft is actively listening to its end users and making changes to improve their experience. That’s a really good business MO, because many tech companies don’t do that.
It raises the question, however, of whether Microsoft is copying Telegram’s threaded conversations. Probably. But who is going to complain?
Whitney Grace, November 6, 2024
Hey, US Government, Listen Up. Now!
November 5, 2024
This post is the work of a dinobaby. If there is art, accept the reality of our using smart art generators. We view it as a form of amusement.
Microsoft on the Issues published “AI for Startups.” The write up is authored by a dream team of individuals deeply concerned about the welfare of their stakeholders, themselves, and their corporate interests. The sensitivity is on display. Who wrote the 1,400 word essay? Setting aside the lawyers, PR people, and advisors, the authors are:
- Satya Nadella, Chairman and CEO, Microsoft
- Brad Smith, Vice-Chair and President, Microsoft
- Marc Andreessen, Cofounder and General Partner, Andreessen Horowitz
- Ben Horowitz, Cofounder and General Partner, Andreessen Horowitz
Let me highlight a couple of passages from the essay (polemic?) which I found interesting.
In the era of trustbusters, some of the captains of industry had firm ideas about the place government professionals should occupy. Look at the railroads. Look at cyber security. Look at the folks living under expressway overpasses. Tumultuous times? That’s on the money. Thanks, MidJourney. A good enough illustration.
Here’s the first snippet:
Artificial intelligence is the most consequential innovation we have seen in a generation, with the transformative power to address society’s most complex problems and create a whole new economy—much like what we saw with the advent of the printing press, electricity, and the internet.
This is a bold statement of the thesis for these intellectual captains of the smart software revolution. I am curious about how one gets from hallucinating software to “the transformative power to address society’s most complex problems and create a whole new economy.” Furthermore, is smart software like printing, electricity, and the Internet? A fact or two might be appropriate. Heck, I would be happy with a nifty Excel chart of some supporting data. But why? This is the first sentence, so back off, you ignorant dinobaby.
The second snippet is:
Ensuring that companies large and small have a seat at the table will better serve the public and will accelerate American innovation. We offer the following policy ideas for AI startups so they can thrive, collaborate, and compete.
Ah, companies large and small and a seat at the table, just possibly down the hall from where the real meetings take place behind closed doors. And the hosts of the real meeting? Big companies like us. As the essay says, “that only a Big Tech company with our scope and size can afford, creating a platform that is affordable and easily accessible to everyone, including startups and small firms.”
The policy “opportunity” for AI startups includes many glittering generalities. The one I like is “help people thrive in an AI-enabled world.” Does that mean universal basic income as smart software “enhances” jobs with McKinsey-like efficiency? Hey, it worked for opioids. It will work for AI.
And what’s a policy statement without a variation on “May you live in interesting times”? The Microsoft a2z twist is, “We obviously live in a tumultuous time.” That’s why the US Department of Justice, the European Union, and a few other Luddites who don’t grok certain behaviors are interested in the big firms which can do smart software right.
Translation: Get out of our way and leave us alone.
Stephen E Arnold, November 5, 2024
FOGINT: Hong Kong: A Significant Crypto Wiggle
November 5, 2024
Hong Kong is taking steps to secure its place in today’s high-tech landscape. Blockonomi reports, “Hong Kong’s Bold Move to Become Asia’s Crypto Capital.” Tax breaks, regulations, and a shiny new virtual asset index underpin the effort. Meanwhile, the Virtual Asset Trading Platform regime launched last year is chugging right along. We suspect Telegram is likely to be the utility for messaging, sales, and marketing.
Writer Oliver Dale tells us:
“The Hong Kong Exchanges and Clearing Limited (HKEX) announced the launch of a Virtual Asset Index Series, scheduled for November 15, 2024. This new index will provide benchmark pricing for Bitcoin and Ether specifically tailored to Asia-Pacific time zones. The Securities and Futures Commission (SFC) is working to finalize a list of crypto exchanges that will receive full licenses by year-end. Eric Yip, executive director for intermediaries at the SFC, revealed plans to establish a consultation panel by early 2025 to maintain oversight of licensed exchanges. The regulatory framework extends beyond trading platforms. Hong Kong authorities are developing comprehensive guidelines for crypto-focused over-the-counter trading desks and custodians, with implementation expected in the coming year. For stablecoin issuers, new requirements are being introduced. Foreign fiat-referenced stablecoin providers will need to establish physical operations in Hong Kong and maintain reserves in local banks.”
Establishing a physical presence in the city is no small thing. Though Hong Kong is a culturally rich and vibrant city, we hear real estate is at a premium. That is ok, we are sure stablecoin geniuses can afford it.
Hong Kong is also working to bring AI tools to the financial sector, but there it is caught between a rock and a hard place. Though a part of China, the dense and wealthy city operates under a unique “one country, two systems” governance framework. As a result, it has limited access to both western AI platforms, like Chat GPT and Gemini, and services from Chinese firms like Baidu and ByteDance. To bridge the gap, local institutions like The Hong Kong University of Science and Technology are building their own solutions. Officials hope tax incentives will attract professional investment firms to the city.
The stablecoin policies should go into effect by the end of this year, while custodian regulations and consultation on over-the-counter trading are to be established some time in 2025.
Cynthia Murrell, November 5, 2024
How to Cut Podcasts Costs and Hassles: A UK Example
November 5, 2024
Using AI to replicate a particular human is a fraught topic. Of paramount concern is the relentless issue of deepfakes. There are also legal issues of control over one’s likeness, of course, and concerns the technology could put humans out of work. It is against this backdrop, the BBC reports, that “Michael Parkinson’s Son Defends New AI Podcast.” The new podcast uses AI to recreate the late British talk show host, who will soon interview (human) guests. Son Mike acknowledges the concerns, but insists this project is different. Writer Steven McIntosh explains:
“Mike Parkinson said Deep Fusion’s co-creators Ben Field and Jamie Anderson ‘are 100% very ethical in their approach towards it, they are very aware of the legal and ethical issues, and they will not try to pass this off as real’. Recalling how the podcast was developed, Parkinson said: ‘Before he died, we [my father and I] talked about doing a podcast, and unfortunately he passed away before it came true, which is where Deep Fusion came in. ‘I came to them and said, ‘if we wanted to do this podcast with my father talking about his archive, is it possible?’, and they said ‘it’s more than possible, we think we can do something more’. He added his father ‘would have been fascinated’ by the project, although noted the broadcaster himself was a ‘technophobe’. Discussing the new AI version of his father, Parkinson said: ‘It’s extraordinary what they’ve achieved, because I didn’t really think it was going to be as accurate as that.’”
So they have the family’s buy-in, and they are making it very clear the host is remade with algorithms. The show is called “Virtually Parkinson,” after all. But there is still that replacing human talent with AI thing. Deep Fusion’s Anderson notes that, since Parkinson is deceased, he is in no danger of losing work. However, McIntosh counters, any guest that appears on this show may give one fewer interview to a show hosted by a different, living person. Good point.
One thing noteworthy about Deep Fusion’s AI on this project is its ability to not just put words in Parkinson’s mouth, but to predict how he would have actually responded. Assuming that function is accurate, we have a request: Please bring back the objective reporting of Walter Cronkite. This world sorely needs it.
Cynthia Murrell, November 5, 2024
Microsoft 24H2: The Reality Versus Self Awareness
November 4, 2024
Sorry. Written by a dumb humanoid. Art? It is AI, folks. Eighty year old dinobabies cannot draw very well in my experience.
I spotted a short item titled “Microsoft Halts Windows 11 24H2 Update for Many PCs Due to Compatibility Issues.” Today is October 29, 2024. By the time you read this item, you may have a Windows equipped computer humming along on the charmingly named 11 24H2 update. That’s the one with Recall.
Microsoft does not see itself as slightly bedraggled. Those with failed updates do. Thanks, ChatGPT, good enough, but at least you work. MSFT Copilot has been down for six days with a glitch.
Now if you work at the Redmond facility where Google paranoia reigns, you probably have Recall running on your computing device as well as Teams’ assorted surveillance features. That means that when you run a query for “updates”, you may see screens presenting an array of information about non-functioning drivers, printer errors, visits to the wonderfully organized knowledge bases, and possibly images of email from colleagues wanting to take kinetic action about the interns, new hires, and ham-fisted colleagues who rolled out an update which does not update.
The write up offers this helpful advice:
We advise users against manually forcing the update through the Windows 11 Installation Assistant or media creation tool, especially on the system configurations mentioned above. Instead, users should check for updates to the specific software or hardware drivers causing the holds and wait for the blocks to be lifted naturally.
Okay.
Let’s look at this from the point of view of bad actors. These folks know that the “new” Windows with its many nifty new features has some issues. When the Softies cannot get wallpaper to work, one knows that deeper, more subtle issues are not on the wizards’ radar.
Thus, the 24H2 update will be installed on bad actors’ test systems and subjected to tests only a fan of Metasploit and related tools can appreciate. My analogy is that these individuals, some of whom are backed by nation states, will give the update the equivalent of a digital colonoscopy. Sorry, Redmond, no anesthetic this go round.
Why?
Microsoft suggests that security is Job Number One. Obviously when fingerprint security functions don’t work and Windows Hello fails, the bad actor knows that other issues exist. My goodness. Why doesn’t Microsoft just turn its PR and advertising firms loose on Telegram hacking groups and announce, “Take me. I am yours!”
Several observations:
- The update is flawed
- Core functions do not work
- Partners, not Microsoft, are supposed to fix the broken slot machine of operating systems
- Microsoft is, once again, scrambling to do what it should have done correctly before releasing a deeply flawed bundle of software.
Net net: Blaming Google for European woes and pointing fingers at everything and everyone except itself, Microsoft is demonstrating that it cannot do a basic task correctly. The only users who are happy are those legions of bad actors in the countries Microsoft accuses of making its life difficult. Sorry, Microsoft, you did this. But you could blame Google, of course.
Stephen E Arnold, November 4, 2024
Will AI Data Scientists Become Street People?
November 4, 2024
Over at HackerNoon, all-around IT guy Dominic Ligot insists data scientists must get on board with AI or be left behind. In “AI Denialism,” he compares data analysts who insist AI can never replace them with 19th century painters who scoffed at photography as an art form. Many of them who specialized in realistic portraits soon found themselves out of work, despite their objections.
Like those painters, Ligot believes, some data scientists are in denial about how well this newfangled technology can do what they do. They hang on to a limited definition of creativity at their peril. In fact, he insists:
“The truth is, AI’s ability to model complex relationships, surface patterns, and even simulate multiple solutions to a problem means it’s already doing much of what data analysts claim as their domain. The fine-grained feature engineering, the subtle interpretations—AI is not just nibbling around the edges; it’s slowly encroaching into the core of what we’ve traditionally defined as ‘analytical creativity.’”
But we are told there is hope for those who are willing to adapt:
“I’m not saying that data scientists or analysts will be replaced overnight. But to assume that AI will never touch their domain simply because it doesn’t fit into an outdated view of what creativity means is shortsighted. This is a transformative era, one that calls for a redefinition of roles, responsibilities, and skill sets. Data analysts and scientists who refuse to keep an open mind risk finding themselves irrelevant in a world that is rapidly shifting beneath their feet. So, let’s not make the same mistake as those painters of the past. Denialism is a luxury we cannot afford.”
Is Ligot right? And, if so, what skill-set changes can preserve data scientists’ careers? That relevant question remains unanswered in this post. (There are good deals on big plastic mugs at Dollar Tree.)
Cynthia Murrell, November 4, 2024
Enter the Dragon: America Is Unhealthy
November 4, 2024
Written by a humanoid dinobaby. No AI except the illustration.
The YouTube video “A Genius Girl Who Is Passionate about Repairing Machines” presents a simple story in a 38 minute video. The idea is that a young woman, with no help, fixes a broken motorcycle with basic hand tools, outside, in what looks like a hoarder’s backyard. The message is: Wow, she is smart and capable. Don’t you wish you knew a person like this who could repair your broken motorcycle?
This video is from @vutvtgamming, and not much information is provided. After watching this and similar videos like “Genius Girl Restored The 280mm Lathe From 50 Years Ago And Made It Look Like”, I feel pretty stupid for an American dinobaby. I don’t think I can recall meeting a person with similar mechanical skills when I worked at Keystone Steel, Halliburton Nuclear, or Booz, Allen & Hamilton’s Design & Development division. The message I carried away was: I was stupid, as were many people with whom I associated.
Thanks, MSFT Copilot. Good enough. (I slipped a put down through your filters. Imagine that!)
I picked up a similar vibe when I read “Today’s AI Ecosystem Is Unsustainable for Most Everyone But Nvidia, Warns Top Scholar.” On the surface, the ZDNet write up is an interview with the “scholar” Kai-Fu Lee, who, according to the article:
served as founding director of Microsoft Research Asia before working at Google and Apple, founded his current company, Sinovation Ventures, to fund startups such as 01.AI, which makes a generative AI search engine called BeaGo.
I am not sure how “scholar” correlates with commercial work for US companies and running an investment firm with a keen interest in Chinese start ups. I would not use the word “scholar.” My hunch is that the intent of Kai-Fu Lee is to present as simple and obvious something that US companies don’t understand. The interview is a different approach to explaining how advanced Kai-Fu Lee’s expertise is. He is, via this interview, sharing an opinion that the US is creating a problem and overlooking the simple solution. Just like the young woman able to repair a motorcycle or the lass fixing up a broken industrial lathe alone, the American approach does not get the job done.
What does ZDNet present as Kai-Fu Lee’s message? Here are a couple of examples:
“The ecosystem is incredibly unhealthy,” said Kai-Fu Lee in a private discussion forum earlier this month. Lee was referring to the profit disparity between, on the one hand, makers of AI infrastructure, including Nvidia and Google, and, on the other hand, the application developers and companies that are supposed to use AI to reinvent their operations.
Interesting. I wonder if the “healthy” ecosystem might be China’s approach of pragmatism and nuts-and-bolts evidenced in the referenced videos. The unhealthy versus healthy is a not-so-subtle message about digging one’s own grave in my opinion. The “economics” of AI are unhealthy, which seems to say, “America’s approach to smart software is going to kill it. A more healthy approach is the one in which government and business work to create applications.” Translating: China, healthy; America, sick as a dog.
Here’s another statement:
Today’s AI ecosystem, according to Lee, consists of Nvidia, and, to a lesser extent, other chip makers such as Intel and Advanced Micro Devices. Collectively, the chip makers rake in $75 billion in annual chip sales from AI processing. “The infrastructure is making $10 billion, and apps, $5 billion,” said Lee. “If we continue in this inverse pyramid, it’s going to be a problem,” he said.
Who will flip the pyramid? Uganda, Lao PDR, Greece? Nope, nope, nope. The flip will take an outfit with a strong mind and body. A healthy entity is needed to flip the pyramid. I wonder if that strong entity is China.
Here’s Kai-Fu Lee’s kung fu move:
He recommended that companies build their own vertically integrated tech stack the way Apple did with the iPhone, in order to dramatically lower the cost of generative AI. Lee’s striking assertion is that the most successful companies will be those that build most of the generative AI components — including the chips — themselves, rather than relying on Nvidia. He cited how Apple’s Steve Jobs pushed his teams to build all the parts of the iPhone, rather than waiting for technology to come down in price.
In the write up Kai-Fu Lee refers to “we”. Who is included in that we? Excluded will be the “unhealthy.” Who is left? I would suggest that the pragmatic and application focused will be the winners. The reason? The “we” includes the healthy entities. Once again I am thinking of China’s approach to smart software.
What’s the correct outcome? Kai-Fu Lee allegedly said:
What should result, he said, is “a smaller, leaner group of leaders who are not just hiring people to solve problems, but delegating to smart enterprise AI for particular functions — that’s when this will make the biggest deal.”
That sounds like the Chinese approach to a number of technical, social, and political challenges. Healthy? Absolutely.
Several observations:
- I wonder if ZDNet checked on the background of the “scholar” interviewed at length?
- Did ZDNet think about the “healthy” versus “unhealthy” theme in the write up?
- Did ZDNet question the “scholar’s” purpose in explaining what’s wrong with the US approach to smart software?
I think I know the answer. The ZDNet outfit and the creators of this unusual private interview believe that the young women rebuilt complicated devices without any assistance. Smart China; dumb America. I understand the message which seems to have not been internalized by ZDNet. But I am a dumb dinobaby. What do I know? Exactly. Unhealthy that American approach to AI.
Stephen E Arnold, October 30, 2024
Computer Security and Good Enough Methods
November 1, 2024
Written by a humanoid dinobaby. No AI except the illustration.
I read “TikTok Owner Sacks Intern for Sabotaging AI Project.” The BBC report is straightforward; it does not provide much “management” or “risk” commentary. In a nutshell, the allegedly China linked ByteDance hired or utilized an intern. The term “intern” used to mean a student who wanted to get experience. Today, “intern” has a number of meanings. For example, for certain cyber fraud outfits operating in Southeast Asia, an “intern” could be:
- A person paid to do work in a special economic zone
- A person coerced into doing work for an organization engaged in cyber fraud
- A person who is indeed a student and wants to get some experience
- An individual kidnapped and forced to perform work; otherwise, bad things can happen in dark rooms.
What’s the BBC say? Here is a snippet:
TikTok owner, ByteDance, says it has sacked an intern for “maliciously interfering” with the training of one of its artificial intelligence (AI) models.
The punishment, according to the write up, was “contacting” the intern’s university. End of story.
My take on this incident is a bit different from the BBC’s.
First, how did a company allegedly linked to the Chinese government make a bad hire? If the student was recommended by a university, what mistake did the university and the professors training the young person commit? The idea is to crank out individuals who snap into certain roles. I am not sure the spirit of an American party school is part of the ByteDance and TikTok work culture, but I may be off base.
Second, when a company hires a gig worker or brings an intern into an organization, are today’s managers able to identify potential issues either with an individual’s work or with that person’s inner wiring? The fact that an intern was able to fiddle with code indicates a failure of internal checks and balances. The larger question is, “Can organizations trust interns who are operating as insiders, but without the controls an organization should have over individual workers?” This gaffe makes clear that modern management methods are not proactive; they are reactive. For that reason, insider threats exist and could do damage. ByteDance, according to the write up, downplayed the harm caused by the intern:
ByteDance also denied reports that the incident caused more than $10m (£7.7m) of damage by disrupting an AI training system made up of thousands of powerful graphics processing units (GPU).
Is this claim credible? Nope. I refer to the information about four companies “downplaying the impact of the SolarWinds hack.” US outfits don’t want to reveal the impact of a cyber issue. Are outfits like ByteDance and TikTok on the up and up about the impact of the intern’s actions?
Third, the larger question becomes, “How does an organization minimize insider threats as it cuts training staff and relies on lower cost labor?” The answer, in my opinion, is clear. An organization does what it can and hopes for the best.
Like many parts of a life in an informationized world or datasphere in my lingo, the quality of most efforts is good enough. The approach guarantees problems in the future. These are problems which cannot be solved. Management just finds something to occupy its time. The victims are the users, the customers, or the clients.
The world, even when allegedly linked with nation states, is struggling to achieve good enough.
Stephen E Arnold, November 1, 2024

