Microsoft: That Old Time Religion Which Sort of Works

November 15, 2024

Having a favorite OS can be akin to being in a technology cult or following a popular religion. Apple people are the polished enthusiasts, Linux users are the odd ones with a secret language and handshakes, while Microsoft is vanilla with diehard followers. Microsoft apparently loves its users and employees to adopt this mantra and feed into it, says Edward Zitron of Where’s Your Ed At? in the article, “The Cult Of Microsoft.”

Zitron reviewed hundreds of Microsoft’s internal documents and spoke with its employees about the company culture. He learned that Microsoft subscribes to “The Growth Mindset,” and it determines how far someone will go within the hallowed Redmond halls. There are two mindsets in this scheme: the growth mindset, in which you can learn and change to keep progressing, and the fixed mindset, in which you believe everything is immutable.

Satya Nadella even wrote a bible of sorts called Hit Refresh that discusses The Growth Mindset. Zitron purports that Nadella wants to set himself up as a messianic figure and used his position to claim a place at the top of the bestseller list. How? He “urged” his Microsoft employees to discuss Hit Refresh with as many people as possible. The communication methods he had his associates use were like a pyramid scheme, aka a multi-level marketing ploy.

Microsoft is as fervent about following The Growth Mindset as women used to be about selling Mary Kay and Avon products. The problem, Zitron reports, is that it has little to do with actual improvement. The Growth Mindset can’t be replicated without the presence of the original creator.

“In other words, the evidence that supports the efficacy of mindset theory is unreliable, and there’s no proof that this actually improves educational outcomes. To quote Wenner Moyer:
‘MacNamara and her colleagues found in their analysis that when study authors had a financial incentive to report positive effects — because, say, they had written books on the topic or got speaker fees for talks that promoted growth mindset — those studies were more than two and half times as likely to report significant effects compared with studies in which authors had no financial incentives.’

Turning to another view: Wenner Moyer’s piece is a balanced rundown of the chaotic world of mindset theory, counterbalanced with a few studies where there were positive outcomes, and focuses heavily on one of the biggest problems in the field — the fact that most of the research is meta-analyses of other people’s data…”

Microsoft has employees write biannual self-performance reviews called Connects. Everyone hates them, but if employees want raises and want to keep their jobs, they have to fill out those forms. What’s even more demeaning is that Copilot is being used to write the Connects. Copilot throws out random metrics and achievements that have no basis in fact.

Is the approach similar to a virtual pyramid scheme? Are employees taught or hired to externalize their success and internalize their failures? If so, the Big Book of MSFT provides grounding in the Redmond way.

Mr. Nadella strikes me as having adopted the principles and mantra of a cult. Will the EU and other regulatory authorities bow before the truth or act out their heresies?

Whitney Grace, November 15, 2024

A Digital Flea Market Tests Smart Software

November 14, 2024

Sales platform eBay has learned some lessons about deploying AI. The company tested three methods and shares its insights in the post, “Cutting Through the Noise: Three Things We’ve Learned About Generative AI and Developer Productivity.” Writer Senthil Padmanabhan explains:

“Through our AI work at eBay, we believe we’ve unlocked three major tracks to developer productivity: utilizing a commercial offering, fine-tuning an existing Large Language Model (LLM), and leveraging an internal network. Each of these tracks requires additional resources to integrate, but it’s not a matter of ranking them ‘good, better, or best.’ Each can be used separately or in any combination, and bring their own benefits and drawbacks.”

The company could have chosen from several existing commercial AI offerings. It settled on GitHub Copilot for its popularity with developers. That, and the eBay codebase was already on GitHub. The team found the tool boosted productivity and produced mostly accurate documents (70%) and code (60%). The only problem: Copilot’s limited data processing ability makes it impractical for some applications. For now.

To tweak and train an open source LLM, the team chose Code Llama 13B. They trained the camelid on eBay’s codebase and documentation. The resulting tool reduced the time and labor required to perform certain tasks, particularly software upkeep. It could also sidestep a problem with off-the-shelf options: because it can be trained to access data across internal services and within non-dependent libraries, it can get to data the commercial solutions cannot find. That way, code duplication can be avoided. Theoretically.

Finally, the team used Retrieval Augmented Generation (RAG) to synthesize documentation across disparate sources into one internal knowledge base. Each piece of information entered into systems like Slack, Google Docs, and wikis automatically received its own vector, which was stored in a vector database. When they queried their internal GPT, it quickly pulled together an answer from all available sources. This reduced the time and frustration of manually searching through multiple systems looking for an answer. Just one little problem: sometimes the AI’s responses were nonsensical. Were any just plain wrong? Padmanabhan does not say.
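eBay does not publish its pipeline, so here is only a toy sketch of the RAG pattern described above. The document names and contents are made up, and a bag-of-words counter stands in for a real embedding model; a production system would store dense vectors from an embedding API and feed the retrieved text to the LLM as context.

```python
import math
from collections import Counter

# Hypothetical in-memory "vector database": snippets from Slack, Google Docs,
# and wikis (contents invented for illustration).
documents = {
    "slack:deploy-channel": "Deploys to staging run nightly at 02:00 UTC.",
    "gdoc:onboarding": "New engineers request VPN access through the IT portal.",
    "wiki:search-service": "The search service caches query results for five minutes.",
}

def embed(text):
    """Stand-in embedding: a bag-of-words Counter.
    A real pipeline would call an embedding model here."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

index = {doc_id: embed(text) for doc_id, text in documents.items()}

def retrieve(query, k=1):
    """Return the ids of the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda d: cosine(q, index[d]), reverse=True)
    return ranked[:k]

# The internal GPT would receive the retrieved text as context for its answer.
hits = retrieve("when do staging deploys run?")
print(hits)  # the Slack deploy note should rank first
```

The point of the pattern is in the last two steps: retrieval narrows many scattered sources to a handful of relevant passages, and only those passages go into the model’s prompt.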

The post concludes:

“These three tracks form the backbone for generative AI developer productivity, and they keep a clear view of what they are and how they benefit each project. The way we develop software is changing. More importantly, the gains we realize from generative AI have a cumulative effect on daily work. The boost in developer productivity is at the beginning of an exponential curve, which we often underestimate, as the trouble with exponential growth is that the curve feels flat in the beginning.”

Okay, sure. It is all up from here. Just beware of hallucinations along the way. After all, that is one little detail that still needs to be ironed out.

Cynthia Murrell, November 14, 2024

Pragmatism or the Normalization of Good Enough

November 14, 2024

Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.

I recall that some teacher told me that the Mona Lisa painter fooled around more with his paintings than he did with his assistants. True or false? I don’t know. I do know that when I wandered into the Louvre in late 2024, there were people emulating sardines. These individuals wanted a glimpse of good old Mona.


Is Hamster Kombat the 2024 incarnation of the Mona Lisa? I think this image is part of the Telegram eGame’s advertising. Nice art. Definitely a keeper for the swipers of the future.

I read “Methodology Is Bullsh&t: Principles for Product Velocity.” The main idea, in my opinion, is do stuff fast and adapt. I think this is similar to the go-go mentality of whatever genius said, “Go fast. Break things.” This version of the Truth says:

All else being equal, there’s usually a trade-off between speed and quality. For the most part, doing something faster usually requires a bit of compromise. There’s a corner getting cut somewhere. But all else need not be equal. We can often eliminate requirements … and just do less stuff. With sufficiently limited scope, it’s usually feasible to build something quickly and to a high standard of quality. Most companies assign requirements, assert a deadline, and treat quality as an output. We tend to do the opposite. Given a standard of quality, what can we ship in 60 days? Recent escapades notwithstanding, Elon Musk has a similar thought process here. Before anything else, an engineer should make the requirements less dumb.

Would the approach work for the Mona Lisa dude or for Albert Einstein? I think Al fumbled along for years, asking people to help with certain mathy issues, and worrying about how he saw a moving train relative to one parked at the station.

I think the idea in the essay is the 2024 view of a practical way to get a product or service in front of prospects. The benefits of redefining “fast” in terms of a specification trimmed to the MVP, or minimum viable product, make sense to TikTok scrollers and venture partners trying to find a pony to ride at a crowded kids’ party.

One of the touchstones in the essay, in my opinion, is this statement:

Our customers are engineers, so we generally expect that our engineers can handle product, design, and all the rest. We don’t need to have a whole committee weighing in. We just make things and see whether people like them.

I urge you to read the complete original essay.

Several observations:

  1. Some people like the Mona Lisa dude are engaged in a process of discovery, not shipping something good enough. Discovery takes some people time, lots of time. What happens during this process is part of expanding an information base.
  2. The go-go approach has interesting consequences; for example, based on anecdotal and flawed survey data, young users of social media evidence a number of interesting behaviors. The idea of “let ‘er rip” appears to have some impact on young people. Perhaps you have first-hand experience with this problem? I know people whose children have manifested quite remarkable behaviors. I do know that the decline of basic mental functions like concentration is visible to me every time a teenager checks me out at the grocery store.
  3. By redefining excellence and quality, the notion of a high-value goal drops down a bit. Some new automobiles don’t work too well; for example, the Tesla Cybertruck owner whose vehicle was not able to leave the dealer’s lot.

Net net: Is a Telegram mini app Hamster Kombat today’s equivalent of the Mona Lisa?

Stephen E Arnold, November 14, 2024

Marketers, Deep Fakes Work

November 14, 2024

Bad actors use AI for pranks all the time, but this could be the first time AI pranked an entire Irish town of its own volition. KXAN reports on the folly: “AI Slop Site Sends Thousands In Ireland To Fake Halloween Parade.” The website MySpiritHalloween.com dubs itself the ultimate resource for all things Halloween. The site uses AI-generated content, and one article told the entire city of Dublin that there would be a parade.

If this were a small Irish village, there would be giggles, and the police would investigate criminal mischief before deciding to stop wasting their time. Dublin, however, is one of the country’s biggest cities, and folks turned out in the thousands to see the Halloween parade. They eventually figured out something was wrong given the absence of barriers, law enforcement, and (most importantly) costumed people on floats!

MySpiritHalloween is owned by Naszir Ali, who was embarrassed by the situation.

Per Ali’s explanation, his SEO agency creates websites and ranks them on Google. He says the company hired content writers who were in charge of adding and removing events all across the globe as they learned whether or not they were happening. He said the Dublin event went unreported as fake and that the website quickly corrected the listing to show it had been cancelled….

Ali said that his website was built and helped along by the use of AI but that the technology only accounts for 10-20% of the website’s content. He added that, according to him, AI content won’t completely help a website get ranked on Google’s first page and that the reason so many people saw MySpiritHalloween.com was because it was ranked on Google’s first page — due to what he calls “80% involvement” from actual humans.

Ali claims his website is based in Illinois, but investigations found that it is hosted in Pakistan. Ali’s website is one among millions that use AI-generated content to manipulate Google’s algorithm. Ali is correct that real humans did make the parade rise to the top of Google’s search results, but he was responsible for the content.

Media around the globe took this as an opportunity to teach the parade goers and others about spotting AI-generated scams. Marketers, launch your AI-infused fakery.

Whitney Grace, November 14, 2024

Smart Software: It May Never Forget

November 13, 2024

A recent paper challenges the big dogs of AI, asking, “Does Your LLM Truly Unlearn? An Embarrassingly Simple Approach to Recover Unlearned Knowledge.” The study was performed by a team of researchers from Penn State, Harvard, and Amazon and published on the research platform arXiv. True or false, it is a nifty poke in the eye for the likes of OpenAI, Google, Meta, and Microsoft, who may have overlooked the obvious. The abstract explains:

“Large language models (LLMs) have shown remarkable proficiency in generating text, benefiting from extensive training on vast textual corpora. However, LLMs may also acquire unwanted behaviors from the diverse and sensitive nature of their training data, which can include copyrighted and private content. Machine unlearning has been introduced as a viable solution to remove the influence of such problematic content without the need for costly and time-consuming retraining. This process aims to erase specific knowledge from LLMs while preserving as much model utility as possible.”

But AI firms may be fooling themselves about this method. We learn:

“Despite the effectiveness of current unlearning methods, little attention has been given to whether existing unlearning methods for LLMs truly achieve forgetting or merely hide the knowledge, which current unlearning benchmarks fail to detect. This paper reveals that applying quantization to models that have undergone unlearning can restore the ‘forgotten’ information.”

Oops. The team found as much as 83% of data thought forgotten was still there, lurking in the shadows. The paper offers an explanation for the problem and suggestions to mitigate it. The abstract concludes:
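A toy NumPy sketch (not the paper’s actual experiments) shows the intuition: gradient-based unlearning typically nudges weights by small amounts, and coarse quantization can snap the nudged weights right back onto the same discrete levels as the originals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": 1,000 full-precision weights.
original = rng.normal(0.0, 1.0, size=1000).astype(np.float32)

# Simulated unlearning: small perturbations intended to "forget" something.
unlearned = original + rng.normal(0.0, 1e-3, size=1000).astype(np.float32)

def bucket(w, lo, hi, bits=4):
    """Index of the uniform quantization level each weight snaps to."""
    scale = (hi - lo) / (2 ** bits - 1)
    return np.round((w - lo) / scale).astype(int)

lo, hi = original.min(), original.max()
same = np.mean(bucket(original, lo, hi) == bucket(unlearned, lo, hi))
print(f"weights in the same 4-bit bucket after 'unlearning': {same:.1%}")
```

Because nearly all of the perturbed weights round to the same quantization levels as the originals, a quantized copy of the “unlearned” model can behave like the model that never forgot — which is the failure mode the paper reports.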

“Altogether, our study underscores a major failure in existing unlearning methods for LLMs, strongly advocating for more comprehensive and robust strategies to ensure authentic unlearning without compromising model utility.”

See the paper for all the technical details. Will the big tech firms take the researchers’ advice and improve their products? Or will they continue letting their investors and marketing departments lead them by the nose?

Cynthia Murrell, November 13, 2024

Insider Threats: More Than Threat Reports and Cumbersome Cyber Systems Are Needed

November 13, 2024

Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.

With actionable knowledge becoming increasingly concentrated, is it a surprise that bad actors go where the information is? One would think that organizations with high-value information would be more vigilant when it comes to hiring people from other countries, using faceless gig worker systems, or relying on an AI-infused résumé on LinkedIn. (Yep, that is a Microsoft entity.)


Thanks, OpenAI. Good enough.

The fact is that big technology outfits are supremely confident in their ability to do no wrong. Billions in revenue will boost one’s confidence in a firm’s management acumen. The UK newspaper Telegraph published “Why Chinese Spies Are Sending a Chill Through Silicon Valley.”

The write up says:

In recent years the US government has charged individuals with stealing technology from companies including Tesla, Apple and IBM and seeking to transfer it to China, often successfully. Last year, the intelligence chiefs of the “Five Eyes” nations clubbed together at Stanford University – the cradle of Silicon Valley innovation – to warn technology companies that they are increasingly under threat.

Did the technology outfits get the message?

The Telegraph article adds:

Beijing’s mission to acquire cutting edge tech has been given greater urgency by strict US export controls, which have cut off China’s supply of advanced microchips and artificial intelligence systems. Ding, the former Google employee, is accused of stealing blueprints for the company’s AI chips. This has raised suspicions that the technology is being obtained illegally. US officials recently launched an investigation into how advanced chips had made it into a phone manufactured by China’s Huawei, amid concerns it is illegally bypassing a volley of American sanctions. Huawei has denied the claims.

With some non-US engineers and professionals having skills needed by the high-flying outfits already aloft, or by those still in their hangars working to launch a breakthrough product or service, US companies go through human resource and interview processes. However, many hires are made because a body is needed, someone knows the candidate, or the applicant is willing to work for less money than an equivalent person with a security clearance, for instance.

The result is that most knowledge centric organizations have zero idea about the security of their information. Remember Edward Snowden? He was visible. Others are not.

Let me share an anecdote without mentioning names or specific countries and companies.

A business colleague hailed from an Asian country. He maintained close ties with his family in his country of origin and had a couple of cousins who worked in the US. I was visiting his company, which provided computer equipment to the firm where I was working in Silicon Valley. He explained to me that a certain “new” technology was going to be released later in the year, and he gave me an overview of this “secret” project. I asked him where the data originated. He looked at me and said, “My cousin. I even got a demo and saw the prototype.”

I want to point out that this was not a hire. The information flowed along family lines. The sharing of information was okay because of the closeness of the family. I later learned the information was secret. I realized that doing an HR interview process is not going to keep secrets within an organization.

I ask the companies with cyber security software which has an insider threat identification capability, “How do you deal with family or high-school relationship information channels?”

The answer? Blank looks.

The Telegraph’s warnings, most of the whiz bang HR methods, and most of the cyber security systems don’t work. Cultural blind spots are a problem. Maybe smart software will prevent knowledge leakage. I think that some hard thinking needs to be applied to this problem. The Telegraph write up does not tackle the job. I would assert that most organizations have fooled themselves. Billions and arrogance have interesting consequences.

Stephen E Arnold, November 13, 2024

Two New Coast Guard Cybersecurity Units Strengthen US Cyber Defense

November 13, 2024

Some may be surprised to learn the Coast Guard had one of the first military units to do signals intelligence. Early in the 20th century, the Coast Guard monitored radio traffic among US bad guys. It is good to see the branch pushing forward. “U.S. Coast Guard’s New Cyber Units: A Game Changer for National Security,” reveals a post from ClearanceJobs. The two units, the Coast Guard Reserve Unit USCYBER and 1941 Cyber Protection Team (CPT), will work with U.S. Cyber Command. Writer Peter Suciu informs us:

“The new cyber reserve units will offer service-wide capabilities for Coast Guardsman while allowing the service to retain cyber talent. The reserve commands will pull personnel from around the United States and will bring experience from the private and public sectors. Based in Washington, D.C., CPTs are the USCG’s deployable units responsible for offering cybersecurity capabilities to partners in the MTS [Marine Transportation System].”

Why tap reserve personnel for these units? Simple: valuable experience. We learn:

“‘Coast Guard Cyber is already benefitting from its reserve members,’ said Lt. Cmdr. Theodore Borny of the Office of Cyberspace Forces (CG-791), which began putting together these units in early 2023. ‘Formalizing reserves with cyber talent into cohesive units will give us the ability to channel a skillset that is very hard to acquire and retain.’”

The Coast Guard Reserve Unit will (mostly) work out of Fort Meade in Maryland, alongside the U.S. Cyber Command and the National Security Agency. The post reminds us the Coast Guard is unique: it operates under the Department of Homeland Security, while the other military branches are part of the Department of Defense. As the primary defender of our ports and waterways, brown water and blue water, we think the Coast Guard is well positioned to capture and utilize cybersecurity intel.

Cynthia Murrell, November 13, 2024

Grooming Booms in the UK

November 12, 2024

The ability of the Internet to connect us to one another can be a beautiful thing. On the flip side, however, are growing problems like this one: The UK’s Independent tells us, “Online Grooming Crimes Reach Record Levels, NSPCC Says.” UK police recorded over 7,000 offenses in that country over the past year, a troubling new high. We learn:

“The children’s charity said the figures, provided by 45 UK police forces, showed that 7,062 sexual communication with a child offences were recorded in 2023-24, a rise of 89% since 2017-18, when the offence first came into force. Where the means of communication was disclosed – which was 1,824 cases – social media platforms were often used, with Snapchat named in 48% of those cases. Meta-owned platforms were also found to be popular with offenders, with WhatsApp named in 12% of those cases, Facebook and Messenger in 12% and Instagram in 6%. In response to the figures, the NSPCC has urged online regulator Ofcom to strengthen the Online Safety Act. It said there is currently too much focus on acting after harm has taken place, rather than being proactive to ensure the design of social media platforms does not contribute to abuse.”

Well, yes, that would be ideal. Specifically, the NSPCC states, regulations around private messaging must be strengthened. UK Minister Jess Phillips emphasizes:

“Social media companies have a responsibility to stop this vile abuse from happening on their platforms. Under the Online Safety Act they will have to stop this kind of illegal content being shared on their sites, including on private and encrypted messaging services, or face significant fines.”

Those fines would have to be significant indeed. Much larger than any levied so far, which are but a routine cost of doing business for these huge firms. But we have noted a few reasons to hope for change. Are governments ready to hold big tech responsible for the harms they facilitate?

Cynthia Murrell, November 12, 2024

Meta, AI, and the US Military: Doomsters, Now Is Your Chance

November 12, 2024

Sorry to disappoint you, but this blog post is written by a dumb humanoid.

The Zuck is demonstrating that he is an American. That’s good. I found the news report about Meta and its smart software in Analytics India magazine interesting. “After China, Meta Just Hands Llama to the US Government to ‘Strengthen’ Security” contains an interesting word pair, “after China.”

What did the article say? I noted this statement:

Meta’s stance to help government agencies leverage their open-source AI models comes after China’s rumored adoption of Llama for military use.

The write up points out:

“These kinds of responsible and ethical uses of open source AI models like Llama will not only support the prosperity and security of the United States, they will also help establish U.S. open source standards in the global race for AI leadership.” said Nick Clegg, President of Global Affairs in a blog post published from Meta.

Analytics India notes:

The announcement comes after reports that China was rumored to be using Llama for its military applications. Researchers linked to the People’s Liberation Army are said to have built ChatBIT, an AI conversation tool fine-tuned to answer questions involving the aspects of the military.

I noted this statement attributed to a “real” person at Meta:

Yann LeCun, Meta’s Chief AI Scientist, did not hold back. He said, “There is a lot of very good published AI research coming out of China. In fact, Chinese scientists and engineers are very much on top of things (particularly in computer vision, but also in LLMs). They don’t really need our open-source LLMs.”

I still find the phrase “after China” interesting. Is money the motive for this open source generosity? Is it a bet on Meta’s future opportunities? No answers at the moment.

Stephen E Arnold, November 12, 2024

Bring Back Bell Labs…Wait, Google Did…

November 12, 2024

Bell Labs was once a magical, inventing wonderland, and it established the foundation for modern communication, including the Internet. Everything was great at Bell Labs until projects got deadlines and creativity was stifled. Hackaday examines the history of the mythical place and discusses whether there could ever be a new Bell Labs in “What Would It Take To Recreate Bell Labs?”

Bell Labs employees were allowed to tinker on their projects for years as long as they focused on something that would benefit the larger company. Their fields ranged from metallurgy to optics to semiconductors and beyond. Bell Labs worked with Western Electric and AT&T. These partnerships resulted in the transistor, the laser, the photovoltaic cell, the charge-coupled device (CCD), the Unix operating system, and more.

What made Bell Labs special was that inventors were allowed to let their creativity marinate and explore their ideas. This came to a screeching halt in 1982 when the US courts ordered AT&T to break up. Western Electric became Lucent Technologies and took Bell Labs with it. The creativity and the gift of time disappeared too. Could Bell Labs exist today? No, not as it was. It would need to be updated:

“The short answer to the original question of whether Bell Labs could be recreated today is thus a likely ‘no’, while the long answer would be ‘No, but we can create a Bell Labs suitable for today’s technology landscape’. Ultimately the idea of giving researchers leeway to tinker is one that is not only likely to get big returns, but passionate researchers will go out of their way to circumvent the system to work on this one thing that they are interested in.”

Google did have a new incarnation of Bell Labs. Did Google invent Google Glass and reap billions in revenue from actions explained in the novel 1984?

Whitney Grace, November 12, 2024
