Smart Software KN Handle This Query for Kia
January 16, 2023
I read “People Can’t Read New Kia Logo, Resulting in 30,000 Monthly Searches for ‘KN Car’.” The issue seems to be a new logo which, when viewed by me, reads as the letter K and the letter N. Since I am a dinobaby, I assumed that I was at fault. But, no. The write up states:
All told, just 56% of the 1,062 survey participants nailed it, while 44% could not correctly identify the letters. Furthermore, 26% of respondents guessed it says “KN”—which results in roughly 30,000 online searches for “KN car” a month, according to Rerev.
I think this means that even the sharp eyed devils (my classification phrase for those in the GenX, GenY, and Millennial cohorts) cannot figure out the logo either.
I conjured up some of the marketing speak used to sell this new design to the Kia deciders:
- “Daddy, I know you hired me, and I like my new logo. You must make it happen. Okay, daddy.” — A person hired via nepotism
- “The dynamic lines make a bold statement about the thrust of the entire Kia line.” — From a bright eyed college graduate with a degree in business who is walking through the design’s advantages
- “Okay. Modern?” — The statement by Song Ho-sung after listening to everyone in the logo meeting.
To me, the fix is simple: just change the name Kia to KN. BYD Auto may be a bigger problem than a KN logo.
Stephen E Arnold, January 16, 2023
Billable Hours: The Practice of Time Fantasy
January 16, 2023
I am not sure how I ended up at a nuclear company in Washington, DC in the 1970s. I was stumbling along in a PhD program, fiddling around indexing poems for professors, and writing essays no one other than some PhD teaching the class would ever read. (Hey, come to think about it that’s the position I am in today. I write essays, and no one reads them. Progress? I hope not. I love mediocrity, and I am living in the Golden Age of meh and good enough.)
I recall arriving and learning from the VP of Human Resources that I had to keep track of my time. Hello? I worked on my own schedule, and I never paid attention to time. Wait. I did. I knew when the university’s computer center would be least populated by people struggling with IBM punch cards and green bar paper.
Now I have to record, according to Nancy Apple (I think that was her name): [a] The project number, [b] the task code, and [c] the number of minutes I worked on that project’s task. I pointed out that I would be shuttling around from government office to government office and then back to the Rockville administrative center and laboratory.
She explained that travel time had a code. I would have a project number, a task code for sitting in traffic on the Beltway, and a watch. Fill in the blanks.
As you might imagine, part of the learning curve for me was keeping track of time. I sort of did this, but as I became more and more engaged in the work about which I cannot speak, I filled in the time sheets every week. Okay, okay. I would fill in the time sheets when someone in Accounting called me and said, “I need your time sheets. We have to bill the client tomorrow. I want the time sheets now.”
As I muddled through my professional career, I understood how people worked and created time fantasy sheets. The idea was to hit the billable hour target without getting an auditor to camp out in my office. I thought of my learnings when I read “A Woman Who Claimed She Was Wrongly Dismissed Was Ordered to Repay Her Former Employer about $2,000 for Misrepresenting Her Working Hours.”
The write up which may or may not be written by a human states:
Besse [the time fantasy enthusiast] met with her former employer on March 29 last year. In a video recording of the meeting shared with the tribunal, she said: “Clearly, I’ve plugged time to files that I didn’t touch and that wasn’t right or appropriate in any way or fashion, and I recognize that and so for that I’m really sorry.” Judge Megan Stewart concluded that TimeCamp [the employee monitoring software watching the time fantasist] “likely accurately recorded” Besse’s work activities. She ordered Besse to compensate her former employer for a 50-hour discrepancy between her timesheets and TimeCamp’s records. In total, Besse was ordered to pay Reach a total of C$2,603 ($1,949) to compensate for wages and other payments, as well as C$153 ($115) in costs.
But the key passage for me was this one:
In her judgment, Stewart wrote: “Given that trust and honesty are essential to an employment relationship, particularly in a remote-work environment where direct supervision is absent, I find Miss Besse’s misconduct led to an irreparable breakdown in her employment relationship with Reach and that dismissal was proportionate in the circumstances.”
Far be it from me to raise questions, but I do have one: “Do lawyers engage in time fantasy billing?”
Of course not, “trust and honesty are essential.”
That’s good to know. Now what about PR and SEO billings? What about consulting firm billings?
If the claw back angle worked for this employer-employee set up, 2023 will be thrilling for lawyers, who obviously will not engage in time fantasy billing. Trust and honesty, right?
Stephen E Arnold, January 16, 2023
Reproducibility: Academics and Smart Software Share a Quirk
January 15, 2023
I can understand why a human fakes data in a journal article or a grant document. Tenure and government money perhaps. I think I understand why smart software exhibits this same flaw. Humans (intentionally or inadvertently) put their thumbs on the buttons that set thresholds and computational sequences.
The key point is, “Which flaw producer is cheaper and faster: Human or code?” My hunch is that smart software wins because in the long run it cannot sue for discrimination, take vacations, and play table tennis at work. The downstream consequence may be that some humans get sicker or die. Let’s ask a hypothetical smart software engineer this question, “Do you care if your model and system causes harm?” I theorize that at least one of the software engineer wizards I know would say, “Not my problem.” The other would say, “Call 1-8-0-0-Y-O-U-W-I-S-H and file a complaint.”
Wowza.
“The Reproducibility Issues That Haunt Health-Care AI” states:
a data scientist at Harvard Medical School in Boston, Massachusetts, acquired the ten best-performing algorithms and challenged them on a subset of the data used in the original competition. On these data, the algorithms topped out at 60–70% accuracy, Yu says. In some cases, they were effectively coin tosses. “Almost all of these award-winning models failed miserably,” he [Kun-Hsing Yu, Harvard] says. “That was kind of surprising to us.”
Wowza wowza.
Will smart software get better? Sure. More data. More better. Think of the start ups. Think of the upsides. Think positively.
I want to point out that smart software may raise an interesting issue: Are flaws inherent because of the humans who created the models and selected the data? Or, are the flaws inherent in the algorithmic procedures buried deep in the smart software?
A palpable desire exists to find and implement a technology that creates jobs, rejuices some venture activities, and sustains the questionable idea that technology solves problems and does not create new ones.
What’s the quirk humans and smart software share? Being wrong.
Stephen E Arnold, January 15, 2023
The Intelware Sector: In the News Again
January 13, 2023
It’s Friday the 13th. Bad luck day for Voyager Labs, an Israel-based intelware vendor. But maybe there is bad luck for Facebook or Meta or whatever the company calls itself. Will there be more bad luck for outfits chasing specialized software and services firms?
Maybe.
The number of people interested in the savvy software and systems which comprise Israel’s intelware industry is small. In fact, even among some of the law enforcement and intelligence professionals whom I have encountered over the years, awareness of the number of firms, their professional and social linkages, and the capabilities of these systems is modest. NSO Group became the poster company for how some of these systems can be used. Not long ago, the Brennan Center made available some documents obtained via legal means about a company called Voyager Labs.
Now the Guardian newspaper (now begging for dollars with blue and white pleas) has published “Meta Alleges Surveillance Firm Collected Data on 600,000 Users via Fake Accounts.” The main idea of the write up is that an intelware vendor created sock puppet accounts with phony names. Under these fake identities, the investigators gathered information. The write up refers to “fake accounts” and says:
The lawsuit in federal court in California details activities that Meta says it uncovered in July 2022, alleging that Voyager used surveillance software that relied on fake accounts to scrape data from Facebook and Instagram, as well as Twitter, YouTube, LinkedIn and Telegram. Voyager created and operated more than 38,000 fake Facebook accounts to collect information from more than 600,000 Facebook users, including posts, likes, friends lists, photos, comments and information from groups and pages, according to the complaint. The affected users included employees of non-profits, universities, media organizations, healthcare facilities, the US armed forces and local, state and federal government agencies, along with full-time parents, retirees and union members, Meta said in its filing.
Let’s think about this fake account thing. How difficult is it to create a fake account on a Facebook property? About eight years ago, as a test, my team created a fake account for a dog. Not once in those eight years was any attempt made to verify the humanness or the dogness of the animal. The researcher (a special librarian in fact) set up the account and demonstrated to others on my research team how the Facebook sign up system worked or did not work in this particular example. Once logged in, faithful and trusting Facebook seemed to keep our super user logged into the test computer. For all I know, Tess is still logged in with Facebook doggedly tracking her every move. Here’s Tess:
Tough to see that Tess is not a true Facebook type, isn’t it?
Is the accusation directed at Voyager Labs a big deal? From my point of view, no. The reason that intelware companies use Facebook is that Facebook makes it easy to create a fake account, exercises minimal administrative review of registered users, and prioritizes other activities.
I personally don’t know what Voyager Labs did or did not do. I don’t care. I do know that other firms providing intelware have the capability of setting up, managing, and automating some actions of accounts for either a real human, an investigative team, or another software component or system. (Sorry, I am not at liberty to name these outfits.)
Grab your bottle of Tums and consider these points:
- What other companies in Israel offer similar alleged capabilities?
- Where and when were these alleged capabilities developed?
- What entities funded start ups to implement alleged capabilities?
- What other companies offer software and services which deliver similar alleged capabilities?
- When did Facebook discover that its own sign up systems had become a go-to source of social action for these intelware systems?
- Why did Facebook ignore its sign up procedures failings?
- Are other countries developing and investing in similar systems with these alleged capabilities? If so, name a company in England, France, China, Germany, or the US?
These one-shot “intelware is bad” stories chop indiscriminately. The vendors get slashed. The social media companies look silly for having little interest in “real” identification of registrants. The licensees of intelware look bad because investigations are somehow “wrong.” I think the media outlets reporting on intelware look silly because the depth of the information on which they craft stories strikes me as shallow.
I am pointing out that a bit more diligence is required to understand the who, what, why, when, and where of specialized software and services. Let’s do some heavy lifting, folks.
Stephen E Arnold, January 13, 2023
Becoming Sort of Invisible
January 13, 2023
When it comes to spying on one’s citizens, China is second to none. But at least some surveillance tech can be thwarted with enough time, effort, and creativity, we learn from Vice in, “Chinese Students Invent Coat that Makes People Invisible to AI Security Cameras.” Reporter Koh Ewe describes China’s current surveillance situation:
“China boasts a notorious state-of-the-art state surveillance system that is known to infringe on the privacy of its citizens and target the regime’s political opponents. In 2019, the country was home to eight of the ten most surveilled cities in the world. Today, AI identification technologies are used by the government and companies alike, from identifying ‘suspicious’ Muslims in Xinjiang to discouraging children from late-night gaming.”
Yet four graduate students at China’s Wuhan University found a way to slip past one type of surveillance with their InvisDefense coat. Resembling any other fashion camouflage jacket, the garment includes thermal devices that emit different temperatures to skew cameras’ infrared thermal imaging. In tests using campus security cameras, the team reduced the AI’s accuracy by 57%. That number could have been higher if they did not also have to keep the coat from looking suspicious to human eyes. Nevertheless, it was enough to capture first prize at the Huawei Cup cybersecurity contest.
But wait, if the students were working to subvert state security, why compete in a high-profile competition? The team asserts it was actually working to help its beneficent rulers by identifying a weakness so it could be addressed. According to researcher Wei Hui, who designed the core algorithm:
“The fact that security cameras cannot detect the InvisDefense coat means that they are flawed. We are also working on this project to stimulate the development of existing machine vision technology, because we’re basically finding loopholes.”
And yet, Wei also stated,
“Security cameras using AI technology are everywhere. They pervade our lives. Our privacy is exposed under machine vision. We designed this product to counter malicious detection, to protect people’s privacy and safety in certain circumstances.”
Hmm. We learn the coat will be for sale to the tune of ¥500 (about $71). We are sure a list of those who purchase such a garment will be helpful, particularly to the Chinese government.
Cynthia Murrell, January 13, 2023
Social Media: Great for Surveillance, Not So Great for Democracy
January 13, 2023
Duh? Friday the 13th.
Respected polling organization the Pew Research Center studied the impact social media has on democratic nations and how citizens view it. According to the recent study “Social Media Seen As Mostly Good For Democracy Across Many Nations, Except US Is A Major Outlier,” the US does not like social media meddling with its politics.
The majority of the polled countries believed social media affected democracy positively and negatively. The US is a large outlier, however, because only 34% of its citizens viewed social media as beneficial while a whopping 64% believed the opposite. The US is not the only one that considered social media to cause division in politics:
“Even in countries where assessments of social media’s impact are largely positive, most believe it has had some pernicious effects – in particular, it has led to manipulation and division within societies. A median of 84% across the 19 countries surveyed believe access to the internet and social media have made people easier to manipulate with false information and rumors. A recent analysis of the same survey shows that a median of 70% across the 19 nations consider the spread of false information online to be a major threat, second only to climate change on a list of global threats.
Additionally, a median of 65% think it has made people more divided in their political opinions. More than four-in-ten say it has made people less civil in how they talk about politics (only about a quarter say it has made people more civil).”
Despite the US being an outlier, respondents in other nations said social media is a tool for staying informed about news, raising public awareness, and allowing citizens to express themselves.
The majority of Americans who negatively viewed social media were affiliated with the Republican Party or were Independents. Democrats and their leaners were less likely to think social media is a bad influence. Younger people also believe social media is more beneficial than older generations.
Social media is another tool created by humans that can be good and bad. Metaphorically it is like a gun.
Whitney Grace, January 13, 2023
Semantic Search for arXiv Papers
January 12, 2023
An artificial intelligence research engineer named Tom Tumiel (InstaDeep) created a Web site called arXivxplorer.com.
According to his Twitter message (posted on January 7, 2023), the system is a “semantic search engine.” The service implements OpenAI’s embedding model. The idea is that this search method allows a user to “find the most relevant papers.” There is a stream of tweets at this link about the service. Mr. Tumiel states:
I’ve even discovered a few interesting papers I hadn’t seen before using traditional search tools like Google or arXiv’s own search function or even from the ML twitter hive mind… One can search for similar or “more like this” papers by “pasting the arXiv url directly” in the search box or “click the More Like This” button.
I ran several test queries, including this one: “Google Eigenvector.” The system surfaced generally useful papers, including one from January 2022. However, when I included the date 2023 in the search string, arXiv Xplorer did not return a null set. The system displayed hits which did not include the date.
Several quick observations:
- The system seems to be “time blind,” which is a common feature of modern search systems
- The system provides the abstract when one clicks on a link. The “view” button in the pop up displays the PDF
- Adding search terms to the query reduces the result set size, a refreshing change from queries which display “infinite scrolling” of irrelevant documents.
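The core mechanism behind a semantic search engine of this type is ranking documents by the similarity of their embedding vectors to the query’s embedding. The sketch below is a minimal illustration under stated assumptions, not Mr. Tumiel’s actual implementation: the paper titles and the tiny three-dimensional vectors are made up stand-ins for real OpenAI embeddings, which have hundreds of dimensions.

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings of paper abstracts (hypothetical data).
papers = {
    "PageRank and eigenvector centrality": [0.9, 0.1, 0.2],
    "Transformer language models":         [0.1, 0.9, 0.3],
    "Spectral graph partitioning":         [0.8, 0.2, 0.1],
}

def search(query_vec, corpus, top_k=2):
    # Rank every paper by cosine similarity to the query embedding.
    ranked = sorted(corpus.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [title for title, _ in ranked[:top_k]]

query = [0.85, 0.15, 0.15]  # pretend embedding of "Google eigenvector"
print(search(query, papers))
# → ['PageRank and eigenvector centrality', 'Spectral graph partitioning']
```

The “time blind” behavior noted above follows naturally from this design: a date in the query string is just more text folded into one vector, not a filter, unless the system applies a separate metadata constraint on top of the similarity ranking.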
For those interested in academic or research papers, will OpenAI become aware of the value of dates, limiting queries to endnotes, and displaying a relationship map among topics or authors in a manner similar to Maltego? By combining more search controls with the OpenAI content and query processing, the service might leapfrog the Lucene/Solr type methods. I think that would be a good thing.
Will the implementation of this system add to Google’s search anxiety? My hunch is that Google is not sure what causes the Google system to perturbate. It may well be that the twitching, the sudden changes in direction, and the coverage of OpenAI itself in blogs may be the equivalent of tremors, soft speaking, and managerial dizziness. Oh, my, that sounds serious.
Stephen E Arnold, January 12, 2023
Spammers, Propagandists, and Phishers Rejoice: ChatGPT Is Here
January 12, 2023
AI-generated art is already receiving tons of backlash from the artistic community, and now writers should tread lightly too. According to No Film School in “You Will Be Impacted By AI Writing…Here Is How,” Hollywood is not a friendly place, but it is certainly weird. Scriptwriters deal with all personalities, especially bad actors, who comment on their work. Now AI algorithms will offer notes on their scripts too.
ChatGPT is a new AI tool that blurs the line between art and aggregation because it can “help” scriptwriters with their work, aka make writers obsolete:
“ChatGPT, and programs like it, scan the internet to help people write different prompts. And we’re seeing it begin to be employed by Hollywood as well. Over the last few days, people have gone viral on Twitter asking the AI interface to write one-act plays based on sentences you type in, as well as answer questions….This is what the program spat back out at me:
‘There is concern among some writers and directors in Hollywood that the use of AI in the entertainment industry could lead to the creation of content that is indistinguishable from human-generated content. This could potentially lead to the loss of jobs for writers and directors, as AI algorithms could be used to automate the process of creating content. Additionally, there is concern that the use of AI in Hollywood could result in the creation of content that is formulaic and lacks the creativity and uniqueness that is typically associated with human-generated content.’”
Egads, that is some good copy! AI automation, however, lacks the spontaneity of human creativity. But the machine generated prose is good enough for spammers, propagandists, phishers, and college students.
Humans are still needed to break the formulaic status quo, but Hollywood bigwigs only see dollar signs, not art. AI creates laughable stories, but they are getting better all the time. AI could and might automate the industry, but the human factor is still needed. The bigger question is: How will humanity’s role change in entertainment?
Whitney Grace, January 12, 2023
Internet Archive Scholar: Will Publishers Find a Way to Stomp This Free Knowledge Beast?
January 12, 2023
Here is a new search service worth noting. The Internet Archive Scholar was built to search the extensive, non-profit Internet Archive. The tool introduces itself:
“This full text search index includes over 25 million research articles and other scholarly documents preserved in the Internet Archive. The collection spans from digitized copies of eighteenth century journals through the latest Open Access conference proceedings and pre-prints crawled from the World Wide Web.”
Yes, that is a lot of information and a dedicated search system is a welcome addition. If only it were easier to find what one is looking for; the search leaves some on the Arnold IT team wanting more functionality. But the service is young, and the page notes that “Metadata is being improved and features have not been finalized.”
The About page tells us more about how the tool works, where the metadata comes from (fatcat.wiki), and where to direct certain queries. It also addresses the issue of text and data mining:
“We intend to provide researcher access to the full corpus for text and data mining purposes. Derived datasets may also be posted publicly for analysis, for example a citation graph or N-gram frequencies by year. If you are interested or would like to see specific datasets made available, please contact us.
Currently snapshots of the full fatcat metadata corpus and upstream metadata sources are uploaded periodically to the Bulk Bibliographic Metadata collection on archive.org. Read more in the Fatcat Guide.”
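A derived dataset like “N-gram frequencies by year” can be sketched in a few lines: group documents by publication year, then count the word sequences in each group. This is only an illustration of the idea, under the assumption that the corpus is available as (year, text) pairs; the sample records below are hypothetical, not drawn from the actual fatcat metadata.

```python
from collections import Counter

def ngram_counts_by_year(records, n=2):
    # records: iterable of (year, text) pairs.
    # Returns {year: Counter mapping each n-gram string to its count}.
    by_year = {}
    for year, text in records:
        tokens = text.lower().split()
        grams = [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        by_year.setdefault(year, Counter()).update(grams)
    return by_year

# Hypothetical sample records for illustration only.
records = [
    (1850, "natural philosophy and natural history"),
    (1850, "notes on natural philosophy"),
    (2020, "deep learning for natural language"),
]

counts = ngram_counts_by_year(records)
print(counts[1850]["natural philosophy"])  # → 2
```

Run over 25 million documents, the same grouping produces the kind of per-year frequency tables that make trends in research vocabulary visible.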
We look forward to seeing what functionality improvements the team implements as the Scholar is developed further. Readers may want to check it out for themselves and/or bookmark the site for future use. We are also curious about publishers’ reactions.
Cynthia Murrell, January 12, 2023
FAA Software: Good Enough?
January 11, 2023
Is today’s software good enough? For many, the answer is, “Absolutely.” I read “The FAA Grounded Every Single Domestic Flight in the U.S. While It Fixed Its Computers.” The article states what many people in affected airports know:
The FAA alerted the public to a problem with the system at 6:29 a.m. ET on Twitter and announced that it had grounded flights at 7:19 a.m. ET. While the agency didn’t provide details on what had gone wrong with the system, known as NOTAM, Reuters reported that it had apparently stopped processing updated information. As explained by the FAA, pilots use the NOTAM system before they take off to learn about “closed runways, equipment outages, and other potential hazards along a flight route or at a location that could affect the flight.” As of 8:05 a.m. ET, there were 3,578 delays within, out, and into the U.S., according to flight-tracking website FlightAware.
NOTAM, for those not into government speak, means “Notice to Air Missions.”
Let’s go back in history. In the 1990s I think I was on the Board of the National Technical Information Service. One of our meetings was in a facility shared with the FAA. I wanted to move my rental car from the direct sunlight to a portion of the parking lot which would be shaded. I left the NTIS meeting, moved my vehicle, and entered through a side door. Guess what? I still remember my surprise when I was not asked for my admission key card. The door just opened and I was in an area which housed some FAA computer systems. I opened one of those doors and poked my nose in and saw no one. I shut the door, made sure it was locked, and returned to the NTIS meeting.
I recall thinking, “I hope these folks do software better than they do security.”
Today’s (January 11, 2023) FAA story reminded me that security procedures provide a glimpse of such technical aspects of a government agency as software. I had an engagement for the blue chip consulting firm for which I worked in the 1970s and early 1980s to observe air traffic control procedures and systems at one of the busy US airports. I noticed that incoming aircraft were monitored by printing out tail numbers and details of the flight, using a rubber band to affix these data to wooden blocks which were stacked in a holder on the air traffic control tower’s wall. A controller knew the next flight to handle by taking the bottommost block, using the data, and putting the unused block back in a box on a table near the bowl of antacid tablets.
I recall that discussions were held about upgrading certain US government systems; for example, the IRS and the FAA computer systems. I am not sure if these systems were upgraded. My hunch is that legacy machines are still chugging along in facilities which hopefully are more secure than the door to the building referenced above.
My point is that “good enough” or “close enough for government work” is not a new concept. Many administrations have tried to address legacy systems and their propensity to [a] fail, like the Social Security Agency’s mainframe to Web system, [b] not work as advertised; that is, output data that just doesn’t jibe with other records of certain activities (sorry, I am not comfortable naming that agency), or [c] be unstable because funds for training staff, money for qualified contractors, or investments in infrastructure to keep the as-is systems working in an acceptable manner are lacking.
I think someone other than a 78 year old should be thinking about how to build technology infrastructure that, unlike Southwest Airlines’ systems or the FAA’s system, does not fail.
Why are these core systems failing? Here’s my list of thoughts. Note: Some of these will make anyone between 45 and 23 unhappy. Here goes:
- The people running agencies and their technology units don’t know what to do
- The consultants hired to do the work agency personnel should do don’t deliver top quality work. The objective may be a scope change or a new contract, not a healthy system
- The programmers don’t know what to do with IBM-type mainframe systems or other legacy hardware. These are not zippy mobile phones which run apps. These are specialized systems whose quirks and characteristics often have to be learned with hands on interaction. YouTube videos or a TikTok instructional video won’t do the job.
Net net: Failures are baked into commercial and government systems. The simultaneous failure of several core systems will generate more than annoyed airline passengers. Time to shift from “good enough” to “do the job right the first time.” See. I told you I would annoy some people with my observations. Well, reality is different from assuming smart software will write itself.
Stephen E Arnold, January 11, 2023