Adobe-Pantone: Has an Innovator Covered You with Freetone Brown?
October 31, 2022
Annoyed about the loss of Pantone colors in Adobe products? I am okay with Affinity and assorted open source tools, so what the innovation-free outfits do is of no consequence to me.
Should you want an open source alternative, a color wizard named Stuart Semple has a solution. The details of the colors, or “frequencies” as I think of them, appear in “I’ve Libertated [sic] the Pantone Colour Palette and I’m Giving It Away for Free Unless You Work for Adobe.”
The colors can be downloaded at this link. If the link goes dead, navigate to Culture Hustle and hunt around for the download link. Even though the palette is free, you will be coughing up your email address and some other potentially interesting information.
Several observations:
- The cloud monetization plays are likely to stimulate some innovation. The new angles will be designed to block, undercut, undermine, or discredit the corporate cleverness.
- As options become available, friction in work processes will increase. File formats, digital fingerprints, and embedded sequences similar to those used in steganography (a minimal sketch of the idea appears after this list) will derail some activities. Getting back on track will consume time and resources.
- User groups are dangerous constructs. In-person groups are less volatile than online communities. Clever corporates may find themselves locked in an unpleasant and litigious social dust-up.
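For the curious, here is a minimal sketch of the fingerprinting idea in the second bullet: a few lines of Python that hide a short tag in the low-order bits of a palette’s RGB values. The tag, the swatch values, and the function names are invented for illustration; no vendor’s actual scheme is implied.

```python
# A minimal sketch, not any vendor's actual scheme: hide a short ASCII tag in the
# low-order bits of a palette's RGB values, one bit per channel.

def embed_tag(palette, tag):
    """Return a copy of palette (a list of (r, g, b) tuples) carrying tag in the LSBs."""
    bits = [int(b) for ch in tag.encode("ascii") for b in format(ch, "08b")]
    if len(bits) > len(palette) * 3:
        raise ValueError("palette too small for tag")
    out, i = [], 0
    for r, g, b in palette:
        channels = []
        for value in (r, g, b):
            if i < len(bits):
                value = (value & ~1) | bits[i]  # overwrite the least significant bit
                i += 1
            channels.append(value)
        out.append(tuple(channels))
    return out

def extract_tag(palette, length):
    """Read length ASCII characters back out of the palette's LSBs."""
    bits = [value & 1 for swatch in palette for value in swatch]
    return "".join(
        chr(int("".join(str(b) for b in bits[i:i + 8]), 2))
        for i in range(0, length * 8, 8)
    )

# Example: tag a 16-swatch palette with a short marker and read it back.
swatches = [(139, 94, 60)] * 16  # a plausible brown; the value is illustrative
tagged = embed_tag(swatches, "FT01")
print(extract_tag(tagged, 4))  # -> FT01
```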
Check out those Freetone browns. What does the color suggest?
Stephen E Arnold, October 31, 2022
What Is the Color of Greed or Will a Color Picker Land You in Court?
October 31, 2022
When I arrived in Washington, DC, for my first real job at a nuclear consulting company loved by Richard Cheney, I found myself responsible for a contractor on K Street. At a meeting with that contractor, I learned that the Cheney fave used a specific color of blue to indicate nuclear radiation. Do you have a color in mind for Cherenkov radiation? I do. The printed color came from a thick and somewhat weird collection of color samples bound with a rivet through the heavy pages. Each page contained a group of colors; for example, PMS 313. I said, “Okay, with me so far.” (The P represents Pantone; the numbers are the presumably proprietary colors once happily confided to the dead-tree printing world.)
On my Mac I have an application called ColorSlurp. No printed collection of color chips needed. Just look at a picture in Yandex images for Cherenkov radiation and click on a color. I can then use that color in a painting application like the estimable Paint.net software.
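For the curious, here is a minimal sketch of what an eyedropper tool like ColorSlurp does under the hood, using the Pillow library; the image path and pixel coordinates are placeholders I made up for illustration.

```python
from PIL import Image  # pip install Pillow

# Hypothetical saved image of Cherenkov radiation; the path and pixel are placeholders.
img = Image.open("cherenkov.png").convert("RGB")
r, g, b = img.getpixel((120, 80))  # sample one pixel

hex_code = f"#{r:02X}{g:02X}{b:02X}"
print(f"Sampled color: {hex_code}")  # paste into Paint.net or any other editor
```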
The color technology seems like magic to me. I can, for example, create a PDF of the goose I use for my logo, tinted a wonderful mélange of dead-leaf brown and feather gray. Am I in legal jeopardy?
I just read “You’re Going to Have To Pay to Use Some Fancy Colors In Photoshop Now.” The article explains much about color intellectual property and nothing about frequency. However, I noted this statement:
widely used Adobe apps like Photoshop, Illustrator, and InDesign will no longer support Pantone-owned colors for free, and those wishing for those colors to appear in their saved files will need to pay for a separate license. And this is real life.
Okay, a subscription to a frequency. I assume this makes sense to CPAs, MBAs, and the Adobe/Pantone crowd.
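Since I keep calling colors “frequencies,” here is the arithmetic behind the quip, a minimal sketch assuming a Cherenkov-ish blue at roughly 470 nm; the wavelength is my guess, the physics is standard.

```python
# Frequency of a visible color from its wavelength: f = c / wavelength
c = 3.0e8            # speed of light in m/s
wavelength = 470e-9  # an assumed Cherenkov-ish blue, about 470 nm

frequency = c / wavelength
print(f"{frequency / 1e12:.0f} THz")  # roughly 638 THz; no subscription required
```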
The point is that cloud services make it easy to monetize that which was more difficult to monetize in Gutenberg’s day.
I think we have discovered a color for greed. That color is linked to the color of attorneys and legal eagle feathers. I don’t want to name a color, present a P number, or include its frequency.
Let’s think about “real life.” Pleasant, isn’t it? What color of brown are the walls in most courtrooms tinted? There must be a PMS number for that. I think it is a combo of fertile loam and Cherenkov radiation. If you see it, it is too late.
Stephen E Arnold, October 31, 2022
An Essay about Big Data Analytics: Trouble Amping Up
October 31, 2022
I read “What Moneyball for Everything Has Done to American Culture.” Who doesn’t love a thrilling data analytics story? Let’s narrow the scope of the question: What MBA, engineer, or Certified Financial Analyst doesn’t love a thrilling data analytics story?
Give up? The answer: 99.9 percent of them emit adrenaline and pheromones in copious quantities. Yeah, baby. Winner!
The essay in the “we beg for dollars politely” publication asserts:
The analytics revolution, which began with the movement known as Moneyball, led to a series of offensive and defensive adjustments that were, let’s say, _catastrophically successful_. Seeking strikeouts, managers increased the number of pitchers per game and pushed up the average velocity and spin rate per pitcher. Hitters responded by increasing the launch angles of their swings, raising the odds of a home run, but making strikeouts more likely as well. These decisions were all legal, and more important, they were all _correct_ from an analytical and strategic standpoint.
Well, that’s what makes the Google-type, Amazon-type, and TikTok-type outfits so darned successful. Data analytics and nifty algorithms pay off. Moneyball!
The essay notes:
The sport that I fell in love with doesn’t really exist anymore.
Is the author talking about baseball, or is the essay pinpointing what’s happened in high-technology user land?
My hunch is that baseball is a metaphor for the outstanding qualities of many admired companies. Privacy? Hey, gone. Security? There is a joke worthy of vaudeville. Reliability? Ho ho ho. Customer service from a person who knows a product? You have to be kidding.
I like the last paragraph:
Cultural Moneyballism, in this light, sacrifices exuberance for the sake of formulaic symmetry. It sacrifices diversity for the sake of familiarity. It solves finite games at the expense of infinite games. Its genius dulls the rough edges of entertainment. I think that’s worth caring about. It is definitely worth asking the question: In a world that will only become more influenced by mathematical intelligence, can we ruin culture through our attempts to perfect it?
Unlike a baseball team’s front office, we can’t fire these geniuses when the money is worthless and the ball disintegrates due to a lack of quality control.
Stephen E Arnold, October 31, 2022
When the Non-Googley Display Their Flaws, Miscommunication Results
October 31, 2022
If you are Googley, you understand the value of the Google way. You embrace abandoned products because smart people do not get bonuses working on loser services. You advocate for new ways to generate revenue because losers have to pony up cash to pay for salaries. You ignore the bleats of the lesser creatures because those lower on the Great Chain of Digital Being deserve their mollusk status.
I want to point out that the article “How Google’s Ad Business Funds Disinformation Around the World” illustrates the miscommunication between the Googlers and the Rest of the World. With ignorance on display, little wonder the free services of the online services company are neither appreciated nor understood.
Consider advertising.
Smart software does not make errors. If a non-Googley person objects to an advertisement which pitches certain products and services, it is the responsibility of the “user” to discern the issue and ignore the message. Smart software, informed by synthetic data and the functionality of Oingo, identifies interests and displays content. By definition, the non-Googley fail to appreciate the sophistication of the Google method. Hence, how can these non-Googley mollusks perceive the benefits of the Googlers?
The cited article purports to provide proof (not big data, not psychological profiles based on user history, and not fancy math informed by decades of sophisticated management actions) that something is amiss in the world of Alphabet Google YouTube and DeepMind Land. Here’s an example:
The investigation also revealed that Google routinely places ads on sites pushing falsehoods about COVID-19 and climate change in French-, German- and Spanish-speaking countries.
Where’s the beef? By definition, the non-Googley have to decide what’s on the money or not. If one has flawed mental equipment, the failure to understand Google is not Google’s problem. It is the way of the world.
Google has a business model which works. True, Google did have to pay to avoid a legal hassle with Yahoo over the online ad furniture before the Google IPO. But in the Google, good ideas are, by definition, Google’s. Therefore, getting caught in a web of insinuations is further proof that a gulf separates the Googley from the non-Googley. Mollusks, remember?
The cited article presents examples from countries which provide a small percentage of Google experts. It makes sense that those who are non-Googley would apply their limited intelligence and analytic skills to countries with certain flaws. Google’s smart software makes smart decisions, and the failure to recognize the excellence of Google’s methods is, by definition, a problem, but not for Google. Come on. Serbia? Turkey? France? Where are these entities on the Great Chain of Digital Being? At the top? France has more types of cheese than Googlers, I think.
Net net: Criticize Meta. Take a look at the Apple tax. Examine the dead squirrels crushed by the Bezos bulldozer. Those are lesser firms which are well suited to scrutiny by the non-Googley. So if you don’t work at Google, how can you understand the excellence of Googlers? Answer: You cannot.
Stephen E Arnold, October 31, 2022
Musky Metaphor: The Sink or a Free-for-All Hellscape?
October 28, 2022
I read “Elon Musk Visits Twitter Carrying Sink As Deal Looms.” The write up (after presenting me with options to sign in, click a free account, or just escape the pop up) reported:
In business parlance, “kitchen sinking” means taking radical action at a company, though it is not clear if this was Mr Musk’s message – he also updated his Twitter bio to read “chief twit”. Mr Musk has said the social media site needs significant changes. At least one report has suggested he is planning major job cuts.
There was a photo, presumably copyright crowned, showing the orbital Elon Musk carrying a kitchen sink. A quick check of kitchen appliance vendors provided some examples of what a kitchen sink looks like.
I compared those sinks with the one in the Beeb’s illustration and learned:
- Mr. Musk chose a white sink
- The drain was visible
- Mr. Musk’s “load” was a bit larger than a Starlink antenna
Now what’s the metaphor? Wikipedia is incredibly helpful when trying to figure out certain allusions made by very bright inventors of incredible assertions about self-driving software.
Wikipedia suggests:
- Freaks of Nature (film), a 2015 comedy horror film, also known as Kitchen Sink
- Kitchen Sink, a 1989 horror short directed by Alison Maclean
- Kitchen Sink (TV series), cookery series on Food Network
- “Kitchen Sink”, a song by Twenty One Pilots from their album Regional at Best
- Kitchen Sink (album), an album by Nadine Shah, 2020
- Kitchen Sink Press, an independent comic book publisher
- Kitchen sink realism, a British cultural movement in the late 1950s and early 1960s
- Kitchen sink syndrome, also known as “scope creep” in project management
- Kitchen sink regression, a usually pejorative term for a regression analysis which uses a long list of possible independent variables
- A sink in a kitchen for washing dishes, vegetables, etc.
I think these are incorrect.
My mind associates the kitchen sink with:
- Going down the drain; that is, getting rid of dirty water, food scraps, and soluble substances (mostly soluble, if I remember what I learned from engineers at the CW Rice Engineering Company)
- An opening into which objects can fall; for example, a ring, grandma’s silver baby spoon, or the lid to a bottle of Shaoxing wine. The allusion then becomes “going down the drain,” which equates to a fail whale
- A collection point for discarded vegetable matter, bits of meat with bone, fish heads, or similar detritus. Yep, fish heads.
What’s your interpretation of the Musky kitchen sink? Wikipedia’s scope creep, or mine, going down the drain? Nah, hellscape.
Be sure to tweet your answer.
Stephen E Arnold, October 28, 2022
Modern Management Practices: Airline and Book Models
October 28, 2022
I know zero about running an airline. Wait. That’s not true. I know these outfits struggle to leave on time and handle baggage. I have heard that the computer systems used by US carriers are similar to those in use at the Internal Revenue Service. End of my info.
I read “American Airlines is Trying to Stop a Popular iPhone App That’s Become a ‘Must Have’ For its Flight Attendants.” The story caught my attention because an iPhone app has become an object of attention at an outfit unable to do what people expect it to do. Please, reference my comment about flying on time and the suitcases.
One flight attendant said of the current situation affecting Sequence Decoder that they had “never seen a company go out of their way to make life harder for their workers.”
The operative phrase “never seen a company go out of their way to make life harder for their workers” is memorable.
I would suggest that there is another company with some management challenges. “Exclusive: Amazon’s Attrition Costs $8 Billion Annually According to Leaked Documents And It Gets Worse” reports:
Amazon churns through workers at an astonishing rate, well above industry averages.
The write up continues:
The paper, published in January of 2022, states that the prior year’s data “indicates regretted attrition [represents] a low of 69.5% to a high of 81.3% across all levels (Tier 1 through Level 10 employees) suggesting a distinct retention issue.”
Two big companies. Neither seems to be able to get in sync with their employees.
Let’s step back. I have a general sense that a number of organizations are unable to manage what I would call the basics; that is, understanding what employees need to do their jobs. On one hand, a software app which appears to improve scheduling strikes me as useful. Obviously the airline’s managers are terrified of software developed by an outsider and embraced by employees. The solution is to cancel it. Isn’t that a disconnect on the part of what I assume are GenX and Millennial managers? Could dinobaby managers help resolve the issue? Of course not! Dinobabies, my goodness, no.
I have found that the online bookstore is less and less able to deliver “next day.” But I am a sample of one. The write up makes clear that one possible reason for the slippage, and for some of the practices of third-party resellers being facilitated, is (you guessed it) management failure. If the company were in touch with its employees, why would it have churn rates in the ballpark of streaming services, churn that consumes billions of dollars? In my opinion, we have an example of management taking a selfie and falling into a ravine.
Observations:
- Managers have to manage and deliver success. Ignoring employee needs is a questionable approach.
- Senior managers have to provide a framework for success. Exhibiting failure at scale suggests that these professionals are not managing in an effective manner from the point of view of the employees.
- Boards of directors have to provide a framework for the policies of the company. The incidents described in the airline and bookstore cases suggest that these individuals are like vacationers: kick back and enjoy the time.
My hunch is that remediating these issues will require more than attitude adjustment, a couple of TED Talks, and new technology. In fact, fixing the issues creating these two referenced case examples may be a job for the reprehensible dinobabies and their prehistoric methods. Is this a popular notion? Nope.
Stephen E Arnold, October 28, 2022
OpenAI and The Evolution of Academic Cheating
October 28, 2022
Once considered too dangerous for public release, OpenAI’s text generator first ventured forth as a private beta. Now a version called Playground is available to everyone and is even free for the first three months (or the first 1,200,00 characters, whichever comes first). Leave it to the free market to breeze past considerations of misuse. We learn from Vice Motherboard that one key concern has materialized: “Students Are Using AI to Write Their Papers, Because Of Course They Are.” It did not take students long to realize this cheat slips right past plagiarism-detecting software because it is not technically plagiarism. Reporter Claire Woodcock writes:
“George Veletsianos, Canada Research Chair in Innovative Learning & Technology and associate professor at Royal Roads University says this is because the text generated by systems like OpenAI API are technically original outputs that are generated within a black box algorithm. ‘[The text] is not copied from somewhere else, it’s produced by a machine, so plagiarism checking software is not going to be able to detect it and it’s not able to pick it up because the text wasn’t copied from anywhere else,’ Veletsianos told Motherboard. ‘Without knowing how all these other plagiarism checking tools quite work and how they might be developed in the future, I don’t think that AI text can be detectable in that way.’ It’s unclear whether the companies behind the AI tools have the ability to detect or prevent students from using them to do their homework. OpenAI did not comment in time for publication.”
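To see why the homework use case is so frictionless, here is a minimal sketch of how a student might call a hosted text generator. It uses the pre-1.0 openai Python package as it existed in late 2022; the model name, prompt, and parameters are my assumptions, not details from the cited article.

```python
import os

import openai  # pre-1.0 client, circa 2022

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

# Hypothetical essay prompt; model choice and parameters are assumptions.
response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Write a five-paragraph essay on the causes of the French Revolution.",
    max_tokens=400,
    temperature=0.7,
)

print(response.choices[0].text.strip())  # "original" text a plagiarism checker cannot match
```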
It was inevitable, really. One writing instructor quoted in the story recognizes today’s students can easily accumulate more knowledge than ever before. However, he laments losing the valuable process of gaining that knowledge through exploration if writing assignments become moot. The instructor has a point, but there is likely no turning back now. Perhaps there is a silver lining: academic institutions may finally be forced to teach like they exist in the 21st century. Students are already there. One cited only as innovate_rye states:
“I still do my homework on things I need to learn to pass, I just use AI to handle the things I don’t want to do or find meaningless. If AI is able to do my homework right now, what will the future look like? These questions excite me.”
That is one way to look at it. Perhaps the spirit of exploration is not dead, but rather evolving. The leaders of tomorrow will be pace setters.
Cynthia Murrell, October 28, 2022
From Our Pipe Dream Department: Harmful AI Must Pay Victims!
October 28, 2022
It looks like the European Commission is taking the potential for algorithms to cause harm seriously. The Register reports, “Europe Just Might Make it Easier for People to Sue for Damage Caused by AI Tech.” Vice-President for Values and Transparency Věra Jourová frames the measure as a way to foster trust in AI technologies. Apparently EU officials believe technical innovation is helped when the public knows appropriate guardrails are in place. What an interesting perspective. Writer Katyanna Quach describes:
“The proposed AI Liability Directive aims to do a few things. One main goal is updating product liability laws so that they effectively cover machine-learning systems and lower the burden-of-proof for a compensation claimant. This ought to make it easier for people to claim compensation, provided they can prove damage was done and that it’s likely a trained model was to blame. This means someone could, for instance, claim compensation if they believe they’ve been discriminated against by AI-powered recruitment software. The directive opens the door to claims for compensation following privacy blunders and damage caused by poor safety in the context of an AI system gone wrong. Another main aim is to give people the right to demand from organizations details of their use of artificial intelligence to aid compensation claims. That said, businesses can provide proof that no harm was done by an AI and can argue against giving away sensitive information, such as trade secrets. The directive is also supposed to give companies a clear understanding and guarantee of what the rules around AI liability are.”
Officials hope such clarity will encourage developers to move forward with AI technologies without the fear of being blindsided by unforeseen allegations. Another goal is to build the current patchwork of AI standards and legislation across Europe into a cohesive set of rules. Commissioner for Justice Didier Reynders declares citizen protection top priority, stating, “technologies like drones or delivery services operated by AI can only work when consumers feel safe and protected.” Really? I’d like to see US officials tell that to Amazon.
Cynthia Murrell, October 28, 2022
What Is Better Than Biometrics Emotion Analysis of Surveillance Videos?
October 27, 2022
Many years ago, my team worked on a project to parse messages, determine if a text message was positive or negative, and flag the negative ones. Our job was then to rank those negative messages in a league table. The team involved professionals in my lab in rural Kentucky, some whiz kids in big universities, a handful of academic experts, and some memorable wizards located offshore. (I have some memories, but, alas, these are not suitable for this write up.)
We used the most recent mechanisms to fiddle information from humanoid outputs. Despite the age of some numerical recipes, we used the latest and greatest. What surprised everyone was that our approach worked, particularly for the league table of the most negative messages. After reviewing our data, we formulated a simple, speedy way to pinpoint the messages which required immediate inspection by a person.
What was our solution for the deployable system?
Did we rely on natural language processing? Nope.
Did we rely on good old Reverend Bayes? Nope.
Did we rely on statistical analysis? Nope.
How did we do this? (Now keep in mind this was more than 15 years ago.)
We used a look up table of keywords.
Why? It delivered the league table of the most negative messages more than 85 percent of the time. The lookups were orders of magnitude faster than the fancy numerical recipes. The system was explainable. The method was extensible to second-order negative messages with synonym expansion and, in effect, a second pass on the not-really-negative messages. Yep, we crept into the 90 percent range.
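Here is a minimal sketch of the approach as described above, not the production system; the keyword set, the synonym table, and the test messages are invented for illustration.

```python
# Negative keywords and a synonym expansion table; both are illustrative only.
NEGATIVE = {"hate", "awful", "scam", "furious", "broken", "worst"}
SYNONYMS = {"angry": "furious", "terrible": "awful", "fraud": "scam"}

def score(message, expand=False):
    """Count negative keyword hits; optionally map synonyms first (the second pass)."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if expand:
        words = [SYNONYMS.get(w, w) for w in words]
    return sum(1 for w in words if w in NEGATIVE)

def league_table(messages):
    """Rank messages most negative first, using the synonym-expanded score."""
    return sorted(messages, key=lambda m: (score(m, expand=True), score(m)), reverse=True)

msgs = [
    "The service was terrible and the billing is a fraud.",
    "Delivery was fine, thanks.",
    "I hate this broken, awful product.",
]
for m in league_table(msgs):
    print(score(m, expand=True), m)
```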
I thought about this work, done for a company which went the way of most lavishly funded, wild and crazy start-ups from the go-go years, when I read “U.K. Watchdog Issues First of Its Kind Warning Against ‘Immature’ Emotional Analysis Tech.” This article addresses fancy methods for parsing images and other content to determine if a person is happy or sad. In reality, the purpose of these systems for some professional groups is to identify a potential bad actor before that individual creates content for the “if it bleeds, it leads” news organizations.
The article states:
The Information Commissioner’s Office, Britain’s top privacy watchdog, issued a searing warning to companies against using so-called “emotional analysis” tech, arguing it’s still “immature” and that the risks associated with it far outweigh any potential benefits.
You should read the full article to get the juicy details. Remember: the text approach required only one level of technology. We used a look up table because the magical methods were too expensive and too time consuming when measured against what was needed: reasonable accuracy.
Taking videos and images, processing them, and determining if the individual in the image is a good actor or a bad actor, a happy actor or a sad actor, a nut job actor or a relative of Mother Teresa’s is another kettle of code.
Let’s go back to the question which is the title of this blog post: What Is Better Than Biometrics Emotion Analysis?
The answer is objective data about the clicks, dwell time, and types of indexed content an individual consumes. Lots of clicks translate to a signal of interest. Dwell time indicates attention. Cross-correlate these data with other available information from primary sources, and one can pinpoint some factoids that are useful in “knowing” about an individual.
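A minimal sketch of the idea, with invented events and arbitrary weights; the only point is that clicks and dwell time roll up into a per-topic interest score.

```python
from collections import defaultdict

# Invented viewing events: (topic, clicks, dwell_seconds). Weights are arbitrary.
events = [
    ("cooking", 1, 12),
    ("politics", 3, 95),
    ("politics", 2, 140),
    ("dance", 1, 8),
]

CLICK_WEIGHT, DWELL_WEIGHT = 1.0, 0.05

def interest_profile(events):
    """Aggregate a crude interest score per topic from clicks and dwell time."""
    scores = defaultdict(float)
    for topic, clicks, dwell in events:
        scores[topic] += CLICK_WEIGHT * clicks + DWELL_WEIGHT * dwell
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for topic, s in interest_profile(events):
    print(f"{topic:10s} {s:.1f}")
```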
My interest in the article was not its reminder that expectations for a technology are usually overinflated. My reaction was, “Imagine how useful TikTok data would be in identifying individuals with specific predilections, mood changes plotted over time, and high-value signals about an individual’s interests.”
Yep, just a reminder that TikTok is in a much better place when it comes to individual analysis than relying on some complicated methods which don’t work very well.
Practical is better.
Stephen E Arnold, October 27, 2022
Google: Beavering Away on Trust, Privacy, and Security
October 27, 2022
Google and trust: What an interesting pair of words. I wonder if anyone remembers the Google Search Appliance and its “phone home” function. I sure do because I was paid to go to a government meeting at one of the Executive Branch agencies so I could intermediate with my contacts in the search appliance mini-unit. I want to point out that customer support, technical support, and access to specific details of the operation of the Google Search Appliance were not easy for licensees to access. Hence, a dinobaby like myself was enlisted for the job. What was the reason for the concern? The GSA worked but the government technology folks were interested in the “phone home” function; specifically, what was available to the GOOG, what was transmitted, and who had access to those data from the government agency?
What do you think the Googler on the call with me in a conference room stuffed with government professionals said? As I recall, the Googler called me on my mobile and I stepped out of the room. The Googler said, “Ask them if the shipping crate is available.” I said, “Okay?” and returned to the room. The Googler popped back into the conference line and said, “Steve, do you have a question?” I turned to the group in the room and asked, “Is the shipping crate in the store room?”
The team leader’s answer was, “Yes.”
The Googler then said, “Steve, would you ask the client to ship the GSA back to us to check?”
The Googler disconnected. I organized the return. The senior government executive later asked me, “Do you trust that outfit?”
My answer was, “I do.”
The government executive said, “I don’t.”
Ah, a different opinion. As a result of the “phone home” feature of the cheerful yellow GSAs, Google made a business decision and abandoned what it delightfully called “enterprise search.”
I thought about this meeting from years ago when I read “Court Documents Allege Google Cultivated Privacy Misconceptions of Chrome’s Incognito Mode.” Was I surprised? Nope. Google is loved by people who like free services. Critical thinking about the data gathered by the online ad agency has not been a widespread practice for decades.
Why now?
My hunch is that a partial understanding of what the Google datasphere has become is now coming into focus. The response of Silicon Valley “real” news outfits is amusing. The shift from cheerleaders to aggrieved info-addicts is interesting.
The cited article states:
Google faces a potential privacy case as a class of millions of users filed to sue it for billions of dollars over Chrome’s Incognito mode lack of genuine privacy protections. While user ignorance is never a great argument in front of a judge, court documents first filed in March of 2021 paint a picture that Google has been complicit in cultivating user misconceptions on privacy. According to the filings, Google Marketing Chief Lorraine Twohill emailed CEO Sundar Pichai last year, warning that they need to consider making Incognito “truly private.” Even more concerning is her indirect admission that they have had to use misleading language when marketing the feature.
That’s the Google game plan.
As with many game plans, other teams eventually figure out how to thwart what once was quite effective. Heck. In the case of the GOOG, the game plan won the equivalent of 20 or more World Cups. Now, however, the playbook and its simple methods of saying one thing and doing another, apologizing and moving forward anyway, and paying trivial fines while taking advantage of advertisers and users have to be fluffed up.
Are the broad outlines of the new playbook discernible? I keep track of some of the changes:
- Distraction
- Shuffling product and service offerings
- Acquisitions which are not technology but consulting
- Continuous interactions with lobbyists and other contacts in Washington, DC; London, England; and Paris, France, among other locations
- Low profile but significant efforts to keep the online ad company’s India activities out of the news spotlights
- Hand waving about new policies in order to put some moats around certain skyrocketing operational costs because… plumbing is expensive, even for the GOOG.
What’s the outcome, in my opinion? (Don’t want my opinion? Just stop reading.) My view is that Google’s management methods will continue to show signs of fragility. Maybe some big cracks will emerge? Lawyering and marketing will kick “real” engineers off the fast track to bonuses. Yikes. The Google is a-changin’… and fast. Example: Incognito, which isn’t incog or neat-o.
Stephen E Arnold, October 27, 2022