How about Those Commercial and US Government RFPs?

April 7, 2022

I am not familiar with the author, whose blog’s name I will not put in my post because of the Google-type systems’ stop word lists. The article is called “Your Competitor Wrote the RFP You Are Bidding On.” Some of the people who have worked on commercial bids are familiar with the process: Read the request for proposal, write the proposal, and win the proposal. Simple, eh? I have more familiarity with the swamp lands in Washington, DC. I have had an opportunity to observe how the sausage is made with regard to requests for information, requests for proposals, the proposals themselves, the decision mechanisms used to award a project, and the formal objections filed by bidders who did not win a contract. My observations are based on more than 50 years of work for government entities as well as some commercial projects for big outfits like the pre-Judge Green AT&T.

The write up states:

As a vendor, your job is to determine whether you want this. It’s costly to bid and often more costly to win. Spending absurd amounts of time across your org doing RFP submissions is rarely quantified from an ROI stand-point. If you’re in a type of business where RFP bids are involved from time to time, do your best to understand if it’s worth it. A typical RFP bid could take many hundreds of hours, from start to finish, especially if you progress past initial phases. Not only does this have easily quantifiable real costs, but the process also has runaway opportunity costs involving the product team, engineering, sales, legal and marketing.

I liked this observation:

The thing is, nobody really needs 80% of the [expletive deleted] that’s in an RFP, and they will never hold you to that. The implementation will take 3x longer than promised and your champions will no longer be with the company by the time you’re rolled-out anyway. By winning large RFP contracts, you will get buried, but not by the requirements you said Yes to. You’ll get buried trying to implement and retain this customer. Every week will be a new urgent requirement that was never covered in the RFP.

I want to point out that the wordage and “wouldn’t it be nice” aspects of an RFP are often included as a result of inputs from consultants to the firm or the government agency. If these consultants have special skills, these skills will often be inserted into the RFP for the purpose of blocking competitors. There are other reasons too.

I look forward to more posts from so [expletive deleted] agile.

Stephen E Arnold, April 7, 2022

Google: Who Makes the Tweaks? Smart Software or Humanoids?

April 7, 2022

I read “Google Tweaks Search and News Results to Direct People to Trusted Sources.” The main idea is that Google wants to do good. Instead of letting people read any old news, the Google “will offer information literacy tips and highlight widely cited sources.” That was quick. Google News became available in 2002. Let’s see. My math is not too good, but that sure looks like more than a week ago.

How are the tweaks implemented? That’s a good question. The write up reports:

Since last June, the company has applied labels to results for “rapidly evolving topics,” which include things like breaking news and viral videos that are spreading quickly. It may suggest checking back later for more details as they become clearer. Starting in the US (in English) today, the labels will include some information literacy tips.

Right. Are the changes implemented by Snorkelized software that learns on the fly which news is not Google quality? Or will actual Googlers peruse the news and decide what’s okay and what needs to be designated l’ordure?

My bet is on one thing. Google’s many protestations that its algorithms do the heavy lifting are a useful way to put on a ghillie suit and disappear from the censorship, editing, and down checking of the inferior information.

If my assumption is incorrect, I can protest and look for my pen. I am 77 and prone to forgetfulness. Google has digital ghillies. Lucky outfit.

Stephen E Arnold, April 7, 2022

Let the Smart Software Do It!

April 6, 2022

Eventually we will produce so much data it will be impossible for mere humans to manage it; AI will simply have to take over soon. This sums up the position of new Dynatrace CEO Rick McConnell as characterized in Diginomica‘s piece, “In Pursuit of General Intelligence—Dynatrace and the Death of the Dashboard.” Here’s a section heading that tickled our fancy: “The [bleeding] edge will make complexity more complex.” You don’t say? Writer Martin Banks describes McConnell’s perspective:

“Without a strong mixture of AI and operational management, the ability to generate any value out of the exploding growth of data will be difficult to maintain. Indeed, control may degrade enough to start reducing the value that can be created. For example, he sees potential growth in edge-related applications and consequent new growth in the data it will inevitably generate. This points to an underlying truth – that the ability for business users to move up the levels of abstraction, to stop seeing the data and instead see the questions and possible answers data represents – read words and sentences rather than see characters from an alphabet – will become essential for fast and effective business management. It will also play an increasingly important role in the management and development of the applications that will get used, especially as they grow to incorporate the edge into what will have to be a holistic soup-to-nuts business management solution. … The goal here is to completely automate out the need for manual intervention and interaction in tasks such as operations remediation.”

The write-up shares some notes about how Dynatrace approaches such automation. Banks also supplies example situations in which only the immediacy of AI will do, from a shopping cart that drops a user’s items to downtime in a large financial system. We see the logic behind these assertions, but there is one complication the article does not address—the already thorny and opaque problem of biased machine learning systems. It seems to us that without human oversight, that issue will only get worse.

Cynthia Murrell, April 6, 2022

System Glitches: A Glimpse of Our Future?

April 4, 2022

I read “Nearly All Businesses Hit by IT Downtime Last Year – Here’s What’s to Blame.” The write up reports:

More than three-quarters (75%) of businesses experienced downtime in 2021, up 25% compared to the previous year, new research has claimed. Cybersecurity firm Acronis polled more than 6,200 IT users and IT managers from small businesses and enterprises in 22 countries, finding that downtime stemmed from multiple sources, with system crashes (52%) being the most prevalent cause. Human error (42%) was also a major issue, followed by cyber attacks (36%) and insider attacks (20%).

Interesting. A cyber security company reports these data. The cyber security industry sector should know. Many smart systems have demonstrated that they are somewhat slow when it comes to safeguarding licensees.

What’s the cause of the issue?

There are “crashes.” But what’s a crash? Human error? Humans make mistakes, and most of the software systems with which I am familiar are dumb: Blackmagic ATEM software “forgets” that users drag and drop. Users don’t intuitively know to put an image in one place and then put that image in another so that the original image is summarily replaced. Windows Defender lights up when we test software from an outfit named Chris. Excel happily exports to PowerPoint but loses the format of the table when it is pasted. There are USB keys and Secure Digital cards which just stop working. Go figure. There are enterprise search systems which cannot display a document saved by a colleague before lunch. Where is it? Yeah, good question. In the indexing queue maybe? Oh, well, perhaps tomorrow the colleague will get the requested feedback?

My takeaway from the write up is that the wild and crazy, helter skelter approach to software and some hardware has created weaknesses, flaws, and dependencies no one knows about. When something goes south, the Easter egg hunt begins. A dead Android device elicits button pushing and the hope that the gizmo shows some signs of life. Mostly not in my experience.

Let’s assume the research is correct. The increase noted in the write up means that software and systems will continue to degrade. What’s the fix? Like many things — from making a government bureaucracy more effective to having an airline depart on time — software and systems seem headed on a downward path.

My take is that we are getting a glimpse of the future. Reality is very different from the perfectly functioning demo and the slick assertions in a PowerPoint deck.

Stephen E Arnold, April 4, 2022

Facebook: Fooled by Ranking?

April 1, 2022

I sure hope the information in “A Facebook Bug Led to Increased Views of Harmful Content Over Six Months” is an April Fool’s joke. The subtitle is interesting too: “The social network touts downranking as a way to thwart problematic content, but what happens when that system breaks?”

The write up explains:

Instead of suppressing posts from repeat misinformation offenders that were reviewed by the company’s network of outside fact-checkers, the News Feed was instead giving the posts distribution, spiking views by as much as 30 percent globally.

Now let’s think about time. The article reports:

In 2018, CEO Mark Zuckerberg explained that downranking fights the impulse people have to inherently engage with “more sensationalist and provocative” content. “Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content,” he wrote in a Facebook post at the time.
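The downranking described above amounts to a score penalty applied before sorting the feed, and the reported bug amounts to that penalty being skipped. A minimal sketch of the idea; the field names and the 0.3 penalty factor are illustrative assumptions, not Facebook’s actual code:

```python
# Sketch of downranking: posts flagged by fact-checkers have their
# ranking score multiplied by a penalty before the feed is sorted.
# All names and values here are hypothetical.

def rank_feed(posts, penalty=0.3):
    """Return posts sorted by score, penalizing flagged ones."""
    def effective_score(post):
        score = post["engagement_score"]
        if post["flagged_by_fact_checkers"]:
            score *= penalty  # downrank: reduce distribution
        return score
    return sorted(posts, key=effective_score, reverse=True)

posts = [
    {"id": "a", "engagement_score": 100, "flagged_by_fact_checkers": True},
    {"id": "b", "engagement_score": 60,  "flagged_by_fact_checkers": False},
]

# With the penalty applied, the flagged post drops below the clean one.
print([p["id"] for p in rank_feed(posts)])               # ['b', 'a']
# A bug that skips the penalty restores the flagged post's distribution.
print([p["id"] for p in rank_feed(posts, penalty=1.0)])  # ['a', 'b']
```

The point of the sketch: the whole safeguard lives in one multiplication, so a change that quietly bypasses it produces exactly the “spiking views” behavior the article describes, with nothing crashing and nothing logged.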

Why did this happen?

The answer may be that assumptions about the functionality of online systems must be verified by those who know the mechanisms used. Then the functions must be checked on a periodic basis. The practice of slipstreaming changes may introduce malfunctions, which no one catches because no one is rewarded for slowing down the operation.

Based on my work for assorted reports and monographs, there are several other causes of a disconnect between what a high technology outfit says and what its systems actually do. Let me highlight what I call the Big Three:

  1. Explaining something that might be is different from delivering the reality of the system. Management wants to believe that code works, and not too many people want to be the person who says, “Yeah, this is what the system is actually doing.” Institutional momentum can crush certain types of behavior.
  2. The dependencies within complex software systems are not understood, particularly by recently hired outside experts, new hires, or — heaven help us — interns who are told to do X without meaningful checks, reviews, and fixes.
  3. An organization’s implicit policies keep feedback contained so the revenue continues to flow. Who gets promoted for screwing up ad sales? As a result, news releases, public statements, and sworn testimony operate in an adjacent but separate conceptual space from the mechanisms that generate live systems.

It has been my experience that when major problems are pointed out, reactions range from “What do you mean?” to a chuckled comment, “That’s just the way software works.”

What intrigues me is the larger question, “Is the revelation that Facebook smart software does not work as the company believed it did the baseline for the company’s systems?” On the other hand, the information could be an ill considered April Fool’s joke.

My hunch is that the article is not humor. Much of Facebook’s and Silicon Valley’s behavior does not tickle my funny bone. My prediction is that some US regulators and possibly Margrethe Vestager will take this information under advisement.

Stephen E Arnold, April 1, 2022

The Artificial Intelligence Balloon: Leaking a Bit, Eh?

March 30, 2022

I noted “Enterprise AI Needs to Deliver Real Value As Adoption Slows.” I am not able to define “real value,” but let’s not quibble. The write up reports that a survey from a publisher / conference organizer / Silicon Valley luminary has identified what might be a leaking hyperbole balloon.

I noted:

The latest annual AI Adoption in the Enterprise survey from O’Reilly finds that over the last two years the number of organizations with AI applications in production has remained steady at 26 percent. However, many enterprises still lack AI governance. Among respondents with AI products in production, the number of those whose organizations have a governance plan in place to oversee how projects are created, measured, and observed (49 percent) is roughly the same as those that don’t (51 percent).

But AI is the next big thing. Innovation will soar. Employees will be wallowing in extra time to do “human things.” Money will flow.

These statements are indeed true for Amazon, Facebook, Google, and a handful of other outfits. But for Bob’s Trucking Company or a small accounting firm in the Rust Belt, well, not so much it seems.

The reason may be nestled in this comment in the article:

“For years, AI has been the focus of the technology world,” says Mike Loukides, vice president of content strategy at O’Reilly and the report’s author. “Now that the hype has died down, it’s time for AI to prove that it can deliver real value, whether that’s cost savings, increased productivity for businesses, or building applications that can generate real value to human lives. This will no doubt require practitioners to develop better ways to collaborate between AI systems and humans, and more sophisticated methods for training AI models that can get around the biases and stereotypes that plague human decision-making.”

What’s the fix? Remediation of algorithmic biases, a shift to NFT innovation, or online gambling?

Those are questions for the little people. The largely unregulated giants are happy to do the smart software thing. Big value is well understood by these firms’ management teams.

Stephen E Arnold, March 30, 2022

Quick Question: Fabricated or Synthetic Data?

March 24, 2022

I read “Evidence of Fabricated Data in a Vitamin C trial by Paul E Marik et al in CHEST.” Non-reproducibility appears to be a function of modern statistical methods. Okay. The angle in this article is:

… within about 5 minutes of reading the study it became overwhelmingly clear that it is indeed research fraud and the data is (sic) fabricated.

Synthetic data are, by definition, fabricated. Some big outfits are into using machine generated data, sort of related to real life data, to save money.

Here’s my question:

What’s the difference between fabricated data and synthetic data?

I am leaning toward “not much.” One might argue that the motive in a research paper is tenure. In other applications, maybe the goal is just efficiency and its close friend money. My larger concern is that embedding fabricated and / or synthetic data into applications may lead to some unexpected consequences. Hey, how about that targeting of a kinetic? Screwy ad targeting is one thing, but less benign situations can be easily conceptualized; for example, “We’re sorry. That smart car self driving module did not detect your mom in the crosswalk.”
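The mechanical distinction, such as it is, can be made concrete. Fabricated data are simply asserted; synthetic data are machine generated from a model fitted to real observations. A minimal sketch using only the standard library; the “real” sample here is invented for illustration, and a normal distribution stands in for whatever model a big outfit would actually use:

```python
import random
import statistics

random.seed(42)

# A small "real" sample (illustrative numbers, not from any study).
real = [4.8, 5.1, 5.3, 4.9, 5.0, 5.2, 4.7, 5.4]

# Synthetic data: fit a simple model (here, a normal distribution)
# to the real sample, then draw new machine-generated values.
mu = statistics.mean(real)
sigma = statistics.stdev(real)
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

# The synthetic sample tracks the real sample's statistics, but every
# value in it is generated, not observed. Fabricated data skip the
# "fit to something real" step entirely, which is why the two can be
# hard to tell apart on inspection.
print(statistics.mean(synthetic), statistics.stdev(synthetic))
```

The sketch makes the “not much” point: once the values leave the generator, nothing in the numbers themselves announces whether a real sample ever anchored them.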

Stephen E Arnold, March 24, 2022

Technology Conferences: What Does Sponsorship Money Buy?

March 14, 2022

I have attended a number of conferences in my 50-year work career. Here’s what I learned:

  1. Giving a conference organizer money can provide access to the attendee list with phone number and email addresses
  2. Paying for a cocktail, breakfast, or some other gathering within the conference can include a speaking slot
  3. Supporting the conference with a payment can result in one of those logo-bedecked bags provided to each and every attendee whether the attendee wants the useless pouch or not
  4. Offering cash may allow a special information channel; for example, a contributor being able to provide a video commercial into each attendee’s hotel room. (Yep, I remember the Chemical Abstracts’ video in London a decade ago.)
  5. Coughing up money results in slots on the program, and these talks are not on the last day of the conference at the tail end of the program day. Nope. These slots are keynotes or hour-long masterpieces of PowerPointery.

Now you get the idea.

“The Tech Industry Controls CS Conference Funding. What Are the Dangers?” explains a more interesting and somewhat more conceptual approach to paying conference organizers big money or small money over, under, or on the table.

The write up points out that money buys a role in selecting the topics and who can talk. That’s the nifty part. The attendee perceives the conference lectures as neutral. Often a panel of “experts” reviews the abstracts and interacts with the speakers. Some conferences offer helpful guidelines. Do conference funding sources manipulate the knobs and dials of the program itself? Yep, conference organizers love to play ball, have favorites (Isn’t Google special?), and nurse their own industry biases (How about quantum computing for intelligence professionals?).

I noted this statement in the cited article:

Relying on large companies and the resources they control can create significant limitations for the kinds of CS research that are proposed, funded and published. The tech industry plays a large hand in deciding what is and isn’t worthy of examination, or how issues are framed. For instance, a tech company might have a very different definition of privacy from that which is used by consumer rights advocates. But if the company is determining the parameters for the kinds of research it wishes to sponsor, it can choose to fund proposals that align with or uphold its own interpretation.

Let’s imagine a hypothetical conference about smart software. The funding entities are part of the Stanford AI Lab persuasion. What happens when Dr. Timnit Gebru and her fellow travelers propose a paper? In an objective, academic, Ivory Tower world, the papers are picked based on some arbitrary set of PhD-infused criteria. What happens if an IBM- or Google-type outfit funds the conference? Forget that Ivory Tower handwaving. The idea will be to advance the agenda. Snorkel, cognitive computing, whatever.

What happens when a history major with an MBA attends the conference looking for something in which to invest his financial firm’s hard earned assets? Is this MBA able to differentiate the goose feathers from the giblets? In my opinion, the MBA will select from the knowledge buffet. Think about a boxed conference lunch sponsored by the keynote speaker’s company. Do you want cheese with your chicken or do you want cheese? See, there is a choice. In reality, one takes what is served.

Even “research” has been converted into information warfare. Objective and exciting, right?

Is the conference organizer complicit? Yep.

Stephen E Arnold March 14, 2022

Amazon: Does the Online Bookstore Sell Petards?

March 14, 2022

What happens when an Amazon wizard says something that allows a real news outfit to write:

In 2020, Jeff Bezos, then the company’s CEO, told the committee Amazon doesn’t allow staff to use data from individual sellers to make competing products, but couldn’t guarantee “that policy has never been violated.” Executives also said in testimony that the company doesn’t use seller data to copy products and then promote its versions in search results, despite reports to the contrary. Source: “DOJ Asked to Investigate Amazon over Possible Obstruction of Congress”

What’s a petard? A search of Amazon reveals that it thinks a petard is a way to find a book in French which seems likely to inflame Tennessee local school board officials. See “Peanut Butter: The Journal de Molly Fredickson”.

The petard of which I am thinking is “hoist by your own petard.” It means, according to the Free Dictionary:

Injured, ruined, or defeated by one’s own action, device, or plot that was intended to harm another; having fallen victim to one’s own trap or schemes. (“Hoist” in this instance is the past participle of the archaic verb “hoist,” meaning to be raised or lifted up. A “petard” was a bell-shaped explosive used to breach walls, doors, and so on.)

Saying one thing under oath and having elected officials learn facts that suggest otherwise is not a credibility booster.

Would senior wizards for the online bookstore dissemble?

Yep, just like some other executives when they say, “Senator, thank you for that question. I don’t know, but I will get back to you.”

Stephen E Arnold, March 14, 2022

IBM: Big Blue May Have Some Digital Re-Engineering to Explain

March 4, 2022

Yo, I am a dinobaby, and I am proud of that fact. You want proof? I know what a rotary dial phone is. I know how to use a facsimile machine. Heck, I can still crank out a mimeograph document. I even know how to get a drink from a terracotta jar in rural Brazil. (Love those chemicals and that wonky purple-blue color which reminds me of Big Blue.)

Several years ago, I read a blog by some IBM people which documented the harvesting of old workers. That blog disappeared, of course. It named managers, disclosed snippets of email, and did a fine job of making clear that oldsters had one function. The idea was that before finding their future elsewhere, the old employees would train their replacements. This is a variation on copying data from a DASD to a zippy new storage device, just with humanoids, not silicon.

I have been following the word dinobaby. I entered it into my log of jazzy new terms coined by millennials and GenXers. I put dinobaby between grosso modo memetic learning and vibe shift. This is not alphabetical, I know, but I like the rhythm of the words when offered in a dinner conversation about technology.

The word appeared in “IBM Executives Planned to Rid the Company of Older, Dinobaby Employees and Replace Them with Millennials, Lawsuit Alleges.” I thought the lawsuit was an interesting opportunity for legal eagles to generate some money.

Then I read the February 26, 2022, story “IBM Cannot Kill This Age-Discrimination Lawsuit Linked to CEO.” Despite Covid, financial turmoil, and the unfortunate events in Eastern Europe:

The judge overseeing an age-discrimination case against IBM has denied the IT giant’s motion to dismiss the lawsuit, citing evidence supporting plaintiff Eugen Schenfeld’s claim that CEO Arvind Krishna, then director of IBM research, made the decision to fire him.

The write up includes a link to a legal document and some snazzy code names; for example, Project Concord, Project Baccarat, and Project Ruby. It appears that each project was intended to get the big, noisy, weird dinobabies out of IBM’s life.

Not happening yet.

The write up asserts that there are more than 10,000 mainframe-capable dinobabies vaporized by the “projects” implemented during the scintillating tenure of Ginni Rometty, former president and CEO of Big Blue. (Did you know that Ms. Rometty worked at General Motors, an esteemed automobile company which developed the Chevrolet Bolt, a model which caught on fire? The owner was not Ginni Rometty. The burning GM vehicle was owned by an elected official in Vermont.)

IBM may escape punishment for its alleged conversion of humanoids into dinobabies. But it will be interesting to follow the legal machinations which now seek to transform dinobabies into hamsters and gerbils with mainframe and other esoteric skills.

Plus the lawyers can consult IBM Watson for inputs!

Stephen E Arnold, March 4, 2022
