Pragmatism or the Normalization of Good Enough
November 14, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.
I recall that some teacher told me that the Mona Lisa painter fooled around more with his paintings than he did with his assistants. True or false? I don’t know. I do know that when I wandered into the Louvre in late 2024, there were people emulating sardines. These individuals wanted a glimpse of good old Mona.
Is Hamster Kombat the 2024 incarnation of the Mona Lisa? I think this image is part of the Telegram eGame’s advertising. Nice art. Definitely a keeper for the swipers of the future.
I read “Methodology Is Bullsh&t: Principles for Product Velocity.” The main idea, in my opinion, is to do stuff fast and adapt. I think this is similar to the go-go mentality of whatever genius said, “Go fast. Break things.” This version of the Truth says:
All else being equal, there’s usually a trade-off between speed and quality. For the most part, doing something faster usually requires a bit of compromise. There’s a corner getting cut somewhere. But all else need not be equal. We can often eliminate requirements … and just do less stuff. With sufficiently limited scope, it’s usually feasible to build something quickly and to a high standard of quality. Most companies assign requirements, assert a deadline, and treat quality as an output. We tend to do the opposite. Given a standard of quality, what can we ship in 60 days? Recent escapades notwithstanding, Elon Musk has a similar thought process here. Before anything else, an engineer should make the requirements less dumb.
Would the approach work for the Mona Lisa dude or for Albert Einstein? I think Al fumbled along for years, asking people to help with certain mathy issues, and worrying about how he saw a moving train relative to one parked at the station.
I think the idea in the essay is the 2024 view of a practical way to get a product or service before prospects. The benefit of redefining “fast” in terms of a specification trimmed to the MVP or minimum viable product makes sense to TikTok scrollers and venture partners trying to find a pony to ride at a crowded kids’ party.
One of the touchstones in the essay, in my opinion, is this statement:
Our customers are engineers, so we generally expect that our engineers can handle product, design, and all the rest. We don’t need to have a whole committee weighing in. We just make things and see whether people like them.
I urge you to read the complete original essay.
Several observations:
- Some people like the Mona Lisa dude are engaged in a process of discovery, not shipping something good enough. Discovery takes some people time, lots of time. What happens along the way is part of expanding an information base.
- The go-go approach has interesting consequences; for example, based on anecdotal and flawed survey data, young users of social media evidence a number of interesting behaviors. The idea of “let ‘er rip” appears to have some impact on young people. Perhaps you have first-hand experience with this problem? I know people whose children have manifested quite remarkable behaviors. I do know that the erosion of basic mental functions like concentrating is visible to me every time I have a teenager check me out at the grocery store.
- By redefining excellence and quality, the notion of a high-value goal drops down a bit. Some new automobiles don’t work too well; for example, the Tesla Cybertruck owner whose vehicle was not able to leave the dealer’s lot.
Net net: Is a Telegram mini app Hamster Kombat today’s equivalent of the Mona Lisa?
Stephen E Arnold, November 14, 2024
Marketers, Deep Fakes Work
November 14, 2024
Bad actors use AI for pranks all the time, but this could be the first time AI pranked an entire Irish town of its own volition. KXAN reports on the folly: “AI Slop Site Sends Thousands In Ireland To Fake Halloween Parade.” The website MySpiritHalloween.com dubs itself the ultimate resource for all things Halloween. The site relies on AI-generated content, and one of its articles told the entire city of Dublin that there would be a parade.
If this were a small Irish village, there would be giggles and the police would investigate criminal mischief before deciding to stop wasting their time. Dublin, however, is one of the country’s biggest cities, and folks showed up in the thousands to see the Halloween parade. They eventually figured out something was wrong given the absence of barriers, law enforcement, and (most importantly) costumed people on floats!
MySpiritHalloween is owned by Naszir Ali, who was embarrassed by the situation.
Per Ali’s explanation, his SEO agency creates websites and ranks them on Google. He says the company hired content writers who were in charge of adding and removing events all across the globe as they learned whether or not they were happening. He said the Dublin event went unreported as fake and that the website quickly corrected the listing to show it had been cancelled….
Ali said that his website was built and helped along by the use of AI but that the technology only accounts for 10-20% of the website’s content. He added that, according to him, AI content won’t completely help a website get ranked on Google’s first page and that the reason so many people saw MySpiritHalloween.com was because it was ranked on Google’s first page — due to what he calls “80% involvement” from actual humans.
Ali claims his website is based in Illinois, but all investigations found that it is hosted in Pakistan. Ali’s website is one among millions that use AI-generated content to manipulate Google’s algorithm. Ali is correct that real humans did make the parade rise to the top of Google’s search results, but he was responsible for the content.
Media around the globe took this as an opportunity to teach the parade-goers and others about being aware of AI-generated scams. Marketers, launch your AI-infused fakery.
Whitney Grace, November 14, 2024
Smart Software: It May Never Forget
November 13, 2024
A recent paper challenges the big dogs of AI, asking, “Does Your LLM Truly Unlearn? An Embarrassingly Simple Approach to Recover Unlearned Knowledge.” The study was performed by a team of researchers from Penn State, Harvard, and Amazon and published on research platform arXiv. True or false, it is a nifty poke in the eye for the likes of OpenAI, Google, Meta, and Microsoft, who may have overlooked the obvious. The abstract explains:
“Large language models (LLMs) have shown remarkable proficiency in generating text, benefiting from extensive training on vast textual corpora. However, LLMs may also acquire unwanted behaviors from the diverse and sensitive nature of their training data, which can include copyrighted and private content. Machine unlearning has been introduced as a viable solution to remove the influence of such problematic content without the need for costly and time-consuming retraining. This process aims to erase specific knowledge from LLMs while preserving as much model utility as possible.”
But AI firms may be fooling themselves about this method. We learn:
“Despite the effectiveness of current unlearning methods, little attention has been given to whether existing unlearning methods for LLMs truly achieve forgetting or merely hide the knowledge, which current unlearning benchmarks fail to detect. This paper reveals that applying quantization to models that have undergone unlearning can restore the ‘forgotten’ information.”
Oops. The team found as much as 83% of data thought forgotten was still there, lurking in the shadows. The paper offers an explanation for the problem and suggestions to mitigate it. The abstract concludes:
“Altogether, our study underscores a major failure in existing unlearning methods for LLMs, strongly advocating for more comprehensive and robust strategies to ensure authentic unlearning without compromising model utility.”
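The shape of the experiment is easy to picture. Here is a minimal sketch, not the authors’ exact protocol, assuming a local checkpoint that has already been run through an unlearning procedure and a handful of probe prompts about the supposedly erased material (both the checkpoint path and the prompts below are hypothetical placeholders); it applies ordinary 8-bit dynamic quantization with PyTorch and compares completions before and after:

# Hypothetical sketch: does quantization resurface "unlearned" knowledge?
# Assumptions (not from the paper): "./unlearned-model" is a local causal-LM
# checkpoint that has already been through an unlearning procedure, and
# PROMPTS probe the material that was supposedly erased.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "./unlearned-model"  # placeholder path to an unlearned checkpoint
PROMPTS = [
    "Question that probes the supposedly forgotten content goes here.",
]

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)
model.eval()

def complete(m, prompt, max_new_tokens=64):
    # Greedy decoding so the before/after outputs are directly comparable.
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = m.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(out[0], skip_special_tokens=True)

# Full-precision pass: an unlearning benchmark would score these as "forgotten".
baseline = {p: complete(model, p) for p in PROMPTS}

# Standard post-training dynamic quantization of the Linear layers to int8 (CPU).
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Probe the quantized weights with the same prompts. Per the paper, a large
# share of the "erased" knowledge can reappear at lower precision.
for p in PROMPTS:
    print("PROMPT:", p)
    print("  fp32:", baseline[p])
    print("  int8:", complete(quantized, p))

If the int8 pass answers probes that the fp32 pass refused or flubbed, the “forgetting” merely hid the knowledge at full precision rather than removing it, which is the failure mode the researchers report.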
See the paper for all the technical details. Will the big tech firms take the researchers’ advice and improve their products? Or will they continue letting their investors and marketing departments lead them by the nose?
Cynthia Murrell, November 13, 2024
Insider Threats: More Than Threat Reports and Cumbersome Cyber Systems Are Needed
November 13, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.
With actionable knowledge becoming increasingly concentrated, is it a surprise that bad actors go where the information is? One would think that organizations with high-value information would be more vigilant when it comes to hiring people from other countries, using faceless gig worker systems, or relying on an AI-infused résumé on LinkedIn. (Yep, that is a Microsoft entity.)
Thanks, OpenAI. Good enough.
The fact is that big technology outfits are supremely confident in their ability to do no wrong. Billions in revenue will boost one’s confidence in a firm’s management acumen. The UK newspaper Telegraph published “Why Chinese Spies Are Sending a Chill Through Silicon Valley.”
The write up says:
In recent years the US government has charged individuals with stealing technology from companies including Tesla, Apple and IBM and seeking to transfer it to China, often successfully. Last year, the intelligence chiefs of the “Five Eyes” nations clubbed together at Stanford University – the cradle of Silicon Valley innovation – to warn technology companies that they are increasingly under threat.
Did the technology outfits get the message?
The Telegraph article adds:
Beijing’s mission to acquire cutting edge tech has been given greater urgency by strict US export controls, which have cut off China’s supply of advanced microchips and artificial intelligence systems. Ding, the former Google employee, is accused of stealing blueprints for the company’s AI chips. This has raised suspicions that the technology is being obtained illegally. US officials recently launched an investigation into how advanced chips had made it into a phone manufactured by China’s Huawei, amid concerns it is illegally bypassing a volley of American sanctions. Huawei has denied the claims.
With some non-US engineers and professionals having skills needed by the high-flying outfits already aloft or working in their hangars to launch their breakthrough product or service, US companies go through human resource and interview processes. However, many hires are made because a body is needed, someone knows the candidate, or the applicant is willing to work for less money than an equivalent person with a security clearance, for instance.
The result is that most knowledge centric organizations have zero idea about the security of their information. Remember Edward Snowden? He was visible. Others are not.
Let me share an anecdote without mentioning names or specific countries and companies.
A business colleague hailed from an Asian country. He maintained close ties with his family in his country of origin. He had a couple of cousins who worked in the US. I was at his company which provided computer equipment to the firm at which I was working in Silicon Valley. He explained to me that a certain “new” technology was going to be released later in the year. He gave me an overview of this “secret” project. I asked him where the data originated. He looked at me and said, “My cousin. I even got a demo and saw the prototype.”
I want to point out that this was not a hire. The information flowed along family lines. The sharing of information was okay because of the closeness of the family. I later learned the information was secret. I realized that doing an HR interview process is not going to keep secrets within an organization.
I ask the companies with cyber security software which has an insider threat identification capability, “How do you deal with family or high-school relationship information channels?”
The answer? Blank looks.
Most of the whiz-bang HR methods and most of the cyber security systems don’t work against this kind of channel. Cultural blind spots are a problem. Maybe smart software will prevent knowledge leakage. I think that some hard thinking needs to be applied to this problem. The Telegraph write up does not tackle the job. I would assert that most organizations have fooled themselves. Billions and arrogance have interesting consequences.
Stephen E Arnold, November 13, 2024
Two New Coast Guard Cybersecurity Units Strengthen US Cyber Defense
November 13, 2024
Some may be surprised to learn the Coast Guard had one of the first military units to do signals intelligence. Early in the 20th century, the Coast Guard monitored radio traffic among US bad guys. It is good to see the branch pushing forward. “U.S. Coast Guard’s New Cyber Units: A Game Changer for National Security,” reveals a post from ClearanceJobs. The two units, the Coast Guard Reserve Unit USCYBER and 1941 Cyber Protection Team (CPT), will work with U.S. Cyber Command. Writer Peter Suciu informs us:
“The new cyber reserve units will offer service-wide capabilities for Coast Guardsman while allowing the service to retain cyber talent. The reserve commands will pull personnel from around the United States and will bring experience from the private and public sectors. Based in Washington, D.C., CPTs are the USCG’s deployable units responsible for offering cybersecurity capabilities to partners in the MTS [Marine Transportation System].”
Why tap reserve personnel for these units? Simple: valuable experience. We learn:
“‘Coast Guard Cyber is already benefitting from its reserve members,’ said Lt. Cmdr. Theodore Borny of the Office of Cyberspace Forces (CG-791), which began putting together these units in early 2023. ‘Formalizing reserves with cyber talent into cohesive units will give us the ability to channel a skillset that is very hard to acquire and retain.’”
The Coast Guard Reserve Unit will (mostly) work out of Fort Meade in Maryland, alongside the U.S. Cyber Command and the National Security Agency. The post reminds us the Coast Guard is unique: it operates under the Department of Homeland Security, while our other military branches are part of the Department of Defense. As the primary defender of our ports and waterways, brown water and blue water, the Coast Guard is, we think, well positioned to capture and utilize cybersecurity intel.
Cynthia Murrell, November 13, 2024
Grooming Booms in the UK
November 12, 2024
The ability of the Internet to connect us to one another can be a beautiful thing. On the flip side, however, are growing problems like this one: The UK’s Independent tells us, “Online Grooming Crimes Reach Record Levels, NSPCC Says.” UK police recorded over 7,000 offenses in that country over the past year, a troubling new high. We learn:
“The children’s charity said the figures, provided by 45 UK police forces, showed that 7,062 sexual communication with a child offences were recorded in 2023-24, a rise of 89% since 2017-18, when the offence first came into force. Where the means of communication was disclosed – which was 1,824 cases – social media platforms were often used, with Snapchat named in 48% of those cases. Meta-owned platforms were also found to be popular with offenders, with WhatsApp named in 12% of those cases, Facebook and Messenger in 12% and Instagram in 6%. In response to the figures, the NSPCC has urged online regulator Ofcom to strengthen the Online Safety Act. It said there is currently too much focus on acting after harm has taken place, rather than being proactive to ensure the design of social media platforms does not contribute to abuse.”
Well, yes, that would be ideal. Specifically, the NSPCC states, regulations around private messaging must be strengthened. UK Minister Jess Phillips emphasizes:
“Social media companies have a responsibility to stop this vile abuse from happening on their platforms. Under the Online Safety Act they will have to stop this kind of illegal content being shared on their sites, including on private and encrypted messaging services, or face significant fines.”
Those fines would have to be significant indeed. Much larger than any levied so far, which are but a routine cost of doing business for these huge firms. But we have noted a few reasons to hope for change. Are governments ready to hold big tech responsible for the harms they facilitate?
Cynthia Murrell, November 12, 2024
Meta, AI, and the US Military: Doomsters, Now Is Your Chance
November 12, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid.
The Zuck is demonstrating that he is an American. That’s good. I found the news report about Meta and its smart software in Analytics India magazine interesting. “After China, Meta Just Hands Llama to the US Government to ‘Strengthen’ Security” contains an interesting word pair, “after China.”
What did the article say? I noted this statement:
Meta’s stance to help government agencies leverage their open-source AI models comes after China’s rumored adoption of Llama for military use.
The write up points out:
“These kinds of responsible and ethical uses of open source AI models like Llama will not only support the prosperity and security of the United States, they will also help establish U.S. open source standards in the global race for AI leadership,” said Nick Clegg, President of Global Affairs, in a blog post published by Meta.
Analytics India notes:
The announcement comes after reports that China was rumored to be using Llama for its military applications. Researchers linked to the People’s Liberation Army are said to have built ChatBIT, an AI conversation tool fine-tuned to answer questions involving the aspects of the military.
I noted this statement attributed to a “real” person at Meta:
Yann LeCun, Meta’s Chief AI Scientist, did not hold back. He said, “There is a lot of very good published AI research coming out of China. In fact, Chinese scientists and engineers are very much on top of things (particularly in computer vision, but also in LLMs). They don’t really need our open-source LLMs.”
I still find the phrase “after China” interesting. Is money the motive for this open source generosity? Is it a bet on Meta’s future opportunities? No answers at the moment.
Stephen E Arnold, November 12, 2024
Bring Back Bell Labs…Wait, Google Did…
November 12, 2024
Bell Labs was once a magical, inventing wonderland and it established the foundation for modern communication, including the Internet. Everything was great at Bell Labs until projects got deadlines and creativity was stifled. Hackaday examines the history of the mythical place and discusses if there could ever be a new Bell Labs in, “What Would It Take To Recreate Bell Labs?”
Bell Labs employees were allowed to tinker on their projects for years as long as they focused on something to benefit the larger company. These fields ranged from metallurgy and optics to semiconductors and more. Bell Labs worked with Western Electric and AT&T. These partnerships resulted in the transistor, the laser, the photovoltaic cell, the charge-coupled device (CCD), the Unix operating system, and more.
What made Bell Labs special was that inventors were allowed to let their creativity marinate and explore their ideas. This came to a screeching halt in 1982 when the US courts ordered AT&T to break up. Western Electric became Lucent Technologies and took Bell Labs with it. The creativity and gift of time disappeared too. Could Bell Labs exist today? No, not as it was. It would need to be updated:
“The short answer to the original question of whether Bell Labs could be recreated today is thus a likely ‘no’, while the long answer would be ‘No, but we can create a Bell Labs suitable for today’s technology landscape’. Ultimately the idea of giving researchers leeway to tinker is one that is not only likely to get big returns, but passionate researchers will go out of their way to circumvent the system to work on this one thing that they are interested in.”
Google did have a new incarnation of Bell Labs. Did Google invent Google Glass and the billions in revenue from actions explained in the novel 1984?
Whitney Grace, November 12, 2024
Disinformation: Just a Click Away
November 11, 2024
Here is an interesting development. “CreationNetwork.ai Emerges As a Leading AI-Powered Platform, Integrating Over Twenty Two Tools,” reports HackerNoon. The AI aggregator uses Telegram plus other social media to push its service. Furthermore, the company is integrating crypto into its business plan. We expect these "blending" plays will become more common. The Chainwire press release says about this one:
“As an all-in-one solution for content creation, e-commerce, social media management, and digital marketing, CreationNetwork.ai combines 22+ proprietary AI-powered tools and 29+ platform integrations to deliver the most extensive digital ecosystem available. … CreationNetwork.ai’s suite of tools spans every facet of digital engagement, equipping users with powerful AI technologies to streamline operations, engage audiences, and optimize performance. Each tool is meticulously designed to enhance productivity and efficiency, making it easy to create, manage, and analyze content across multiple channels.”
See the write-up for a list of the tools included in CreationNetwork.ai, from AI Copywriter to Team-Powered Branding. The hefty roster of platform connections is also specified, including obvious players: all the major social media platforms, the biggest e-commerce platforms, and content creation tools like Canva, Grammarly, Adobe Express, Unsplash, and Dropbox. We learn:
“One of the most distinguishing features of CreationNetwork.ai is its extensive integration network. With over 29 integrations, users can synchronize their digital activities across major social media, e-commerce, and content platforms, providing centralized management and engagement capabilities. … This integration network empowers users to manage their brand presence across platforms from a single, unified dashboard, significantly enhancing efficiency and reach.”
Nifty. What a way to simplify digital processes for users. And to make it harder for new services to break into the market. But what groundbreaking platform would be complete without its own cryptocurrency? The write-up states:
“In preparation for its Initial Coin Offering (ICO), CreationNetwork.ai is launching a $750,000 CRNT Token Airdrop to reward early supporters and incentivize participation in the CreationNetwork.ai ecosystem. Qualified participants can secure their position by following CreationNetwork.ai’s social media accounts and completing the whitelist form available on the official website. This initiative highlights CreationNetwork.ai’s commitment to building a strong, engaged community.”
Online smart software is helpful in many ways.
Cynthia Murrell, November 11, 2024
The Bezos Bulldozer Could Stall in a Nuclear Fuel Pool
November 11, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.
Microsoft is going to flip a switch, and one of Three Mile Island’s nuclear units will blink on. Yeah. Google is investing in small nuclear power units. Buy one, haul it to the data center of your choice, and plug it in. Shades of Tesla thinking. Amazon has also been fascinated by Cherenkov radiation, which is blue like Jack Benny’s eyes.
A physics amateur learned about 880 volts by reading books on his Kindle. Thanks, MidJourney. Good enough.
Are these PR-tinged information nuggets for real? Sure, absolutely. The big tech outfits are able to do anything, maybe not well, but everything. Almost.
The “trusted” real news outfit (Thomson Reuters) published “US Regulators Reject Amended Interconnection Agreement for Amazon Data Center.” The story reports as allegedly accurate information:
U.S. energy regulators rejected an amended interconnection agreement for an Amazon data center connected directly to a nuclear power plant in Pennsylvania, a filing showed on Friday. Members of the Federal Energy Regulatory Commission said the agreement to increase the capacity of the data center located on the site of Talen Energy’s Susquehanna nuclear generating facility could raise power bills for the public and affect the grid’s reliability.
Amazon was not inventing a functional modular nuclear reactor using the better option, thorium. No. Amazon just wanted to run a few of those innocuous high-voltage transmission lines, plug in a converter readily available from one of Amazon’s third-party merchants, and power a data center chock full of dolphin-loving servers, storage devices, and other gizmos. What’s the big deal?
The write up does not explain what “reliability” and “national security” mean. Let’s just accept these as words which roughly translate to “unlikely.”
Is this an issue that will go away? My view is, “No.” Nuclear engineers are not widely represented among the technical professionals engaged in selling third-party vendors’ products, figuring out how to make Alexa into a barn burner of a product, or forcing Kindle users to smash their devices in frustration when trying to figure out what’s on their Kindle and what’s in Amazon’s increasingly bizarro cloud system.
Can these companies become nuclear adepts? Sure. Will that happen quickly? Nope. Why? Nuclear is a specialized field and involves a number of quite specific scientific disciplines. But Amazon can always ask Alexa and point to its Ring doorbell system as the solution to security concerns. The approach will impress regulatory authorities.
Stephen E Arnold, November 11, 2024