RightHub: Will It Supercharge IP Protection and Violation Trolls?

March 16, 2023

Yahoo, believe it or not, displayed an article I found interesting. The title was “Copy That: RightHub Wants To Be the Command Center for Intellectual Property Management.” The story originated on a Silicon Valley “real news” site called TechCrunch.

The write up explains that managing patent, trademark, and copyright information is a hassle. RightHub is, according to the story:

…something akin to what GoDaddy promises in the world of website creation, insofar as GoDaddy allows anyone to search, register, and renew domain names, with additional tools for building and hosting websites.

I am not sure that a domain-name type of model is going to have the professional, high-brow machinery that rights-sensitive outfits expect. I am not sure that many people understand that the domain-name model is fraught with manipulated expiry dates, wheeling and dealing, and possibly good old-fashioned fraud.

The idea of using a database and scripts to keep track of intellectual property is interesting. Tools are available to automate many of the discrete steps required to file, follow up, renew, and remember who did what and when.
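
The deadline-tracking part of that automation is simple enough to sketch. This is a hypothetical illustration, not RightHub’s system; the portfolio records, field names, and 90-day warning window are invented for the example.

```python
from datetime import date, timedelta

# Hypothetical records; a real service would pull filings from a
# patent or trademark office database rather than a hand-built list.
portfolio = [
    {"asset": "ACME logo trademark", "type": "trademark", "renewal": date(2023, 6, 1)},
    {"asset": "US 9,999,999 patent",  "type": "patent",    "renewal": date(2024, 1, 15)},
]

def due_for_renewal(records, on=None, window_days=90):
    """Return assets whose renewal date falls within the warning window."""
    on = on or date.today()
    horizon = on + timedelta(days=window_days)
    return [r["asset"] for r in records if on <= r["renewal"] <= horizon]

print(due_for_renewal(portfolio, on=date(2023, 3, 16)))  # → ['ACME logo trademark']
```

A script like this, run on a schedule against a filings database, covers the “renew and remember who did what and when” chore; the hard parts are the office-by-office filing rules, not the calendar math.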

But domain-name processes as a touchstone?

Sorry. I think that the service will embrace a number of sub-functions which may be of interest to some people; for example, enforcement trolls. Many are using manual or outmoded tools like decades-old image recognition technology and partial Web content scanning methods. If RightHub offers a robust system, IP protection may become easier. Some trolls will be among the first to seek inspiration and possibly opportunities to be more troll-like.

Stephen E Arnold, March 16, 2023

The Confluence: Big Tech, Lobbyists, and the US Government

March 13, 2023

I read “Biden Admin’s Cloud Security Problem: It Could Take Down the Internet Like a Stack of Dominos.” I was thinking that the take down might be more like the collapses of outfits like Silicon Valley Bank.

I noted this statement about the US government, which is

embarking on the nation’s first comprehensive plan to regulate the security practices of cloud providers like Amazon, Microsoft, Google and Oracle, whose servers provide data storage and computing power for customers ranging from mom-and-pop businesses to the Pentagon and CIA.

Several observations:

  1. Lobbyists have worked to make it easy for cloud providers and big technology companies to generate revenue in an unregulated environment.
  2. Government officials have responded with inaction and spins through the revolving door. A regulator or elected official today becomes tomorrow’s technology decision maker and then back again.
  3. The companies themselves have figured out how to use their money and armies of attorneys to do what is best for the companies paying them.

What’s the consequence? Wonderful wordsmithing is one consequence. The problem is that now there are Mauna Loas burbling in different places.

Three of them are evident. The first is the fragility of the Silicon Valley approach to innovation, which is reactive and imitative at this time. The second is the complexity of the three-body problem created by lobbyists, government methods, and monopolistic behaviors. The third is that commercial enterprises have become familiar with the practice of putting their thumbs on the scale. Who will notice?

What will happen? The possible answers are not comforting. Waving a magic wand and changing what are now institutional behaviors established over decades of handcrafting will be difficult.

I touch on a few of the consequences in an upcoming lecture for the attendees at the 2023 National Cyber Crime Conference.

Stephen E Arnold, March 13, 2023

Why Governments and Others Outsource… Almost Everything

January 24, 2023

I read a very good essay called “Questions for a New Technology.” The core of the write up is a list of eight questions. Most of these are problems for full-time employees. Let me give you one example:

Are we clear on what new costs we are taking on with the new technology? (monitoring, training, cognitive load, etc)

The challenge, it strikes me, is the phrase “new technology.” By definition, most people in an organization will not know the details of the new technology. If a couple of people do, these individuals have to get the others up to speed. The other problem is that it is quite difficult for humans to look at a “new technology” and know about the knock-on or downstream effects. A good example is the craziness of Facebook’s dating objective and how the system evolved into a mechanism for social revolution. What in-house group of workers can tackle problems like that once the method leaves the dorm room?

The other questions probe similarly difficult tasks.

But my point is that most governments do not rely on their full-time employees to solve problems. Years ago I gave a lecture at CeBIT about search. One person in the audience pointed out that in that individual’s EU agency, third parties were hired to analyze and help implement a solution. The same behavior popped up in Sweden, the US, Canada, and several other countries in which I worked prior to my retirement in 2013.

Three points:

  1. Full-time employees recognize the impossibility of tackling fundamental questions and don’t really try
  2. The consultants retained to answer the questions or help answer the questions are not equipped to answer the questions either; they bill the client
  3. Fundamental questions are dodged by management methods like “let’s push decisions down” or “we decide in an organic manner.”

Doing homework and making informed decisions is hard. A reluctance to learn, evaluate risks, and implement in a thoughtful manner is uncomfortable for many people. The result is the dysfunction evident in airlines, government agencies, hospitals, education, and many other disciplines. Scientific research is often non-reproducible. Is that a good thing? Yes, if one lacks expertise and does not want to accept responsibility.

Stephen E Arnold, January 25, 2023

College Student Builds App To Detect AI Written Essays: Will It Work? Sure

January 19, 2023

Artists are worried that AI algorithms will steal their jobs, and now writers are in the same boat because the same thing is happening to them. AI is now competent enough to write coherent text. Algorithms can write simple conversations, short movie scripts, and flash fiction, and even assist in the writing process. Students are also excited about the prospect of AI writing algorithms, because it means they can finally outsource their homework to computers. Or they could have done that until someone was clever enough to design an AI that detects AI-generated essays. Business Insider reports on how a college student is now the bane of the global student body: “A Princeton Student Built An App Which Can Detect If ChatGPT Wrote An Essay To Combat AI-Based Plagiarism.”

Princeton computer science major Edward Tian spent his winter holiday designing an algorithm to detect whether an essay was written by the new AI writer ChatGPT. Dubbed GPTZero, Tian’s AI can correctly identify what is written by a human and what is not. GPTZero works by rating text on its perplexity, complexity, and randomness. GPTZero proved to be so popular that it crashed soon after its release. The app is now in a beta phase that people can sign up for, or they can use it on Tian’s Streamlit page.
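
The perplexity-and-randomness idea can be shown with a toy model. This is not Tian’s code: GPTZero uses a large language model to score text, while this sketch substitutes a simple add-one-smoothed unigram model built from the text itself, just to illustrate how per-sentence perplexity and its variance (the “burstiness” human prose tends to show) might be computed.

```python
import math
import re
from collections import Counter

def unigram_model(text):
    """Count words to build a crude add-one-smoothed unigram model."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return counts, sum(counts.values()), len(counts)

def perplexity(sentence, counts, total, vocab):
    """Per-word perplexity of one sentence under the unigram model."""
    words = re.findall(r"[a-z']+", sentence.lower())
    if not words:
        return 0.0
    log_sum = sum(math.log((counts[w] + 1) / (total + vocab)) for w in words)
    return math.exp(-log_sum / len(words))

def burstiness(text):
    """Variance of sentence-level perplexities; flat, uniform text scores low."""
    counts, total, vocab = unigram_model(text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    scores = [perplexity(s, counts, total, vocab) for s in sentences]
    mean = sum(scores) / len(scores)
    return sum((x - mean) ** 2 for x in scores) / len(scores)

sample = ("The cat sat on the mat. The cat sat on the mat. "
          "Suddenly, a peculiar thunderstorm rearranged everything.")
print(round(burstiness(sample), 2))
```

The intuition the sketch captures: machine-generated text tends toward uniformly probable sentences (low variance), while human writing mixes predictable and surprising ones. A real detector swaps the unigram model for an actual language model.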

Tian’s desire to prevent AI plagiarism motivated him to design GPTZero:

“Tian, a former data journalist with the BBC, said that he was motivated to build GPTZero after seeing increased instances of AI plagiarism. ‘Are high school teachers going to want students using ChatGPT to write their history essays? Likely not,’ he tweeted.”

AI writing algorithms are still in their infancy, like art generation AI. Writers should not fear job replacement yet. Artistic AI places the arts in the same position painting occupied with the advent of photography, radio with television, and libraries with the Internet. Artistic AI will change the mediums, but portions of them will persevere and others will change. AI should be used as a tool to improve the process.

Students would never find and use a work-around.

Whitney Grace, January 19, 2023

FAA Software: Good Enough?

January 11, 2023

Is today’s software good enough? For many, the answer is, “Absolutely.” I read “The FAA Grounded Every Single Domestic Flight in the U.S. While It Fixed Its Computers.” The article states what many people in affected airports know:

The FAA alerted the public to a problem with the system at 6:29 a.m. ET on Twitter and announced that it had grounded flights at 7:19 a.m. ET. While the agency didn’t provide details on what had gone wrong with the system, known as NOTAM, Reuters reported that it had apparently stopped processing updated information. As explained by the FAA, pilots use the NOTAM system before they take off to learn about “closed runways, equipment outages, and other potential hazards along a flight route or at a location that could affect the flight.” As of 8:05 a.m. ET, there were 3,578 delays within, out, and into the U.S., according to flight-tracking website FlightAware.

NOTAM, for those not into government speak, means “Notice to Air Missions.”

Let’s go back in history. In the 1990s I think I was on the Board of the National Technical Information Service. One of our meetings was in a facility shared with the FAA. I wanted to move my rental car from the direct sunlight to a portion of the parking lot which would be shaded. I left the NTIS meeting, moved my vehicle, and entered through a side door. Guess what? I still remember my surprise when I was not asked for my admission key card. The door just opened and I was in an area which housed some FAA computer systems. I opened one of those doors and poked my nose in and saw no one. I shut the door, made sure it was locked, and returned to the NTIS meeting.

I recall thinking, “I hope these folks do software better than they do security.”

Today’s (January 11, 2023) FAA story reminded me that security procedures provide a glimpse of such technical aspects of a government agency as software. I had an engagement for the blue-chip consulting firm for which I worked in the 1970s and early 1980s to observe air traffic control procedures and systems at one of the busy US airports. I noticed that incoming aircraft were monitored by printing out tail numbers and details of the flight, using a rubber band to affix these data to wooden blocks which were stacked in a holder on the air traffic control tower’s wall. A controller knew the next flight to handle by taking the bottommost block, using the data, and putting the unused block back in a box on a table near the bowl of antacid tablets.

I recall that discussions were held about upgrading certain US government systems; for example, the IRS and the FAA computer systems. I am not sure if these systems were upgraded. My hunch is that legacy machines are still chugging along in facilities which hopefully are more secure than the door to the building referenced above.

My point is that “good enough” or “close enough for government work” is not a new concept. Many administrations have tried to address legacy systems and their propensity to [a] fail, like the Social Security Administration’s mainframe-to-Web system; [b] not work as advertised, that is, output data that just doesn’t jibe with other records of certain activities (sorry, I am not comfortable naming that agency); or [c] become unstable because funds for training staff, money for qualified contractors, or investments in infrastructure to keep the as-is systems working in an acceptable manner are lacking.

I think someone other than a 78-year-old should be thinking about how to build technology infrastructure that does not fail the way Southwest Airlines’ systems or the FAA’s system did.

Why are these core systems failing? Here’s my list of thoughts. Note: Some of these will make anyone between 45 and 23 unhappy. Here goes:

  1. The people running agencies and their technology units don’t know what to do
  2. The consultants hired to do the work agency personnel should do don’t deliver top quality work. The objective may be a scope change or a new contract, not a healthy system
  3. The programmers don’t know what to do with IBM-type mainframe systems or other legacy hardware. These are not zippy mobile phones which run apps. These are specialized systems whose quirks and characteristics often have to be learned with hands on interaction. YouTube videos or a TikTok instructional video won’t do the job.

Net net: Failures are baked into commercial and government systems. The simultaneous failure of several core systems will generate more than annoyed airline passengers. It is time to shift from “good enough” to “do the job right the first time.” See? I told you I would annoy some people with my observations. Well, reality is different from assuming that smart software will write itself.

Stephen E Arnold, January 11, 2023

Insight about Software and Its Awfulness

January 10, 2023

Software is great, isn’t it? Try to do hanging indents with numbers in Microsoft Word. If you want this function without wasting time with illogical and downright weird controls, call a Microsoft Certified Professional to code what you need. Law firms are good customers. What about figuring out which control in BlackMagic DaVinci delivers the effect you want? No problem. Hire someone who specializes in the mysteries of this sort of free software. No expert in Princeton, Illinois, or Bear Dance, Montana? Do the Zoom thing with a gig worker. That’s efficient. There are other examples; for instance, do you want to put your MP3 on an iPhone? Yeah, no problem. Just ask a 13 year old. She may do the transfer for less than an Apple Genius.

Why is software awful?

“There Is No Software Maintenance” takes a step toward explaining what’s going on and what’s going to get worse. A lot worse. The write up states:

Software maintenance is simply software development.

I think this means that a minimal viable product is forever. What changes are wrappers, tweaks, and new MVP functions. Yes, that’s user friendly.

The essay reports:

The developers working on the product stay with the same product. They see how it is used, and understand how it has evolved.

My experience suggests that the mindset apparent in this article is the new normal.

The advantages are faster and cheaper development, quicker revenue, and a specific view of the customer as irrelevant even if he, she, or it pays money.

The downsides? I jotted down a few which occurred to me:

  1. Changes may or may not “work”; that is, printing is killed. So what? Just fix it later.
  2. Users’ needs are secondary to what the product wizards are going to do. Oh, well, let’s take a break and not worry about today. Let’s plan for new features for tomorrow. Software is a moving target for everyone now.
  3. Assumptions about who will stick around to work on a system or software are meaningless. Staff quit, staff are RIFed, and staff are just an entity on the other end of an email with a contract working in Bulgaria or Pakistan.

What’s being lost with this attitude or mental framing? How about trust, reliability, consistency, and stability?

Stephen E Arnold, January 10, 2023

Are Bad Actors Working for Thrills?

December 27, 2022

Nope, some bad actors may be forced to participate in online criminal behavior. Threats, intimidation, a beating or two, or worse can focus some people to do what is required.

The person trying to swindle you online might be doing so under duress. “Cyber Criminals Hold Asian Tech Workers Captive in Scam Factories,” reports Context. The article begins with the story of Stephen Wesley, an Indian engineer who thought he was taking a graphic design job in Thailand. Instead he found himself carted off to Myanmar, relieved of his passport and phone, and forced to work up to 18 hours a day perpetuating cryptocurrency scams. This went on for 45 days, until he and about 130 others were rescued from such operations by Indian authorities. Reporters Anuradha Nagaraj and Nanchanok Wongsamuth reveal:

“Thousands of people, many with tech skills, have been lured by social media advertisements promising well-paid jobs in Cambodia, Laos and Myanmar, only to find themselves forced to defraud strangers worldwide via the internet. … The cybercrime rings first emerged in Cambodia, but have since moved into other countries in the region and are targeting more tech-savvy workers, including from India and Malaysia. Authorities in these countries and United Nations officials have said they are run by Chinese gangsters who control gambling across southeast Asia and are making up for losses during the pandemic lockdowns. The experts say the trafficked captives are held in large compounds in converted casinos in Cambodia, and in special economic zones in Myanmar and Laos. ‘The gangs targeted skilled, tech-savvy workers who had lost jobs during the pandemic and were desperate, and fell for these bogus recruitment ads,’ said Phil Robertson, deputy director for Asia at Human Rights Watch. ‘Authorities have been slow to respond, and in many cases these people are not being treated as victims of trafficking, but as criminals because they were caught up in these scams.'”

A long-game tactic typically used by these outfits is eloquently named “pig butchering,” wherein the operator builds trust with each victim through fake profiles on social media, messaging apps, and dating apps. Once the mark is hooked, the involuntary con artist pressures them to invest in phony crypto or trading schemes. Beware virtual suitors bearing unique investment opportunities.

Sadly, recent tech layoffs are bound to accelerate this trend. Bad actors are not going to pass up a chance to get talent cheaply. Myanmar’s current government, which seized power in February 2021, declined to comment. After months of denying the problem existed, we are told, Cambodian officials are finally cracking down on these operations. The article states thousands of workers are still trapped.

Business is business as the saying goes.

Cynthia Murrell, December 27, 2022

Google and Its Puzzles: Insiders Only, Please

December 26, 2022

ProPublica made available an article of some importance in my opinion. “Porn, Piracy, Fraud: What Lurks Inside Google’s Black Box Ad Empire” walks through the intentional, quite specific engineering of its crucial advertising system to maximize revenue and befuddle (is “defraud” a synonym?) advertisers. I was asked more than a decade ago to do a presentation of my team’s research into Google’s advertising methodology. I declined. At that time, I was doing some consulting work for a company I am not permitted to name. That contract stipulated that I would not talk about a certain firm’s business technologies. I signed because… money.

The ProPublica essay does the revealing about what is presented as a duplicitous, underhanded, and probably illegal business process subsystem. I don’t have to present any of the information I have gathered over the years. I can cite this important article and point out several rocks which the capable writers at ProPublica either did not notice or flipped over, concluding, “Nah, nothing to see here.”

I urge you to do two things. First, read the ProPublica write up. Second, print it out. My hunch is that it may be disappeared or become quite difficult to find at some point in the future. Why? Ah, grasshopper, that is a question easily answered by the managers who set up Foundem and who were stomped by Googzilla. Alternatively, you could chase down a person at the French government tax authority and ask, “Why were French tax forms not findable via a Google search for several years?” These individuals might have the information you need. Shifting gears: Ask Magix, the software company responsible for Sony Vegas, why cracks for the software appear in YouTube videos. If you use your imagination, you will come up with ideas for gathering first-person information about the lovable online advertising company’s systems and methods. Hint: Look up Dr. Timnit Gebru and inquire about her interactions with one of Google’s chief scientists. I guarantee that a useful anecdote will bubble up.

So what’s in the write up? Let me highlight a main point and then cite a handful of interesting statements in the article.

What is the main point? In my opinion, ProPublica’s write up says, “The GOOG maximizes its return at the expense of the advertisers and of the users.”

Who knew? Not me. I think the Alphabet Google YouTube DeepMind outfit is the most wonderfulest company in the world. Remember: You heard this here first. I have a priceless Google mouse pad too.

Consider these three statements from the essay. First, Google lingo is interesting:

Google spokesperson Michael Aciman said the company uses a combination of human oversight, automation and self-serve tools to protect ad buyers and said publisher confidentiality is not associated with abuse or low quality.

The idea is that Google is interested in using a hybrid method to protect ad buyers. Plus, there is a difference between publishers and confidentiality. I find it interesting that instead of talking about [a] the ads themselves (porn, drugs, etc.), [b] the buyers of advertising, which is a distinct industry dependent upon Google for revenue, [c] the companies who want to get their message in front of people allegedly interested in the product or service, or [d] the user of search or some other Google service, Google wants to “protect ad buyers.” And what about the others I have identified? Google doesn’t care. Logical, sure, but doesn’t Google have the other entities in mind? That’s a question regulators should have asked and had answered after Google settled the litigation with Yahoo over advertising technology, at the time of Google’s acquisition of Oingo (Applied Semantics), or at the time Google acquired DoubleClick. In my opinion, much of the ProPublica write up operates in a neverland of weird Google speak, not the reality of harvesting money from those largely in the dark about what’s happening in the business processes.

Second, consider this statement:

we matched 70% of the accounts in Google’s ad sellers list to one or more domains or apps, more than any dataset ProPublica is aware of. But we couldn’t find all of Google’s publisher partners. What we did find was a system so large, secretive and bafflingly complex that it proved impossible to uncover everyone Google works with and where it’s sending advertisers’ money.

The passage seems to suggest that Google’s engineers went beyond clever and ventured into the murky acreage of intentional obfuscation. It seems as if Google wanted to be able to consume advertising budgets without any entity having the ability to determine [a] if the ad were displayed in a suitable context; that is, did the advertiser’s message match the needs of the user to whom the ad was shown? And [b] was the ad appropriate even if it contained words and phrases on Google’s unofficial stop word lists? (If you have not seen these, send an email to benkent2020 at yahoo dot com, and one of my team will email you some of the more interesting words that Google’s somewhat lax processes will definitely try to block. If a word is not on a Google stop list, then the messages will probably be displayed. Remember: as Google terminates six percent of its staff, some of those humans presumably will not be able to review ads per item [a] above.) And [c] note the word “bafflingly.” The focus of much Google engineering over the last 15 years has been to build competitive barriers, extend the monopoly function with “partners,” and double talk in order to keep regulators and curious Congressional people away. That’s my take on this passage.

Now for the third passage I will cite:

…we uncovered scores of previously unreported peddlers of pirated content, porn and fake audiences that take advantage of Google’s lax oversight to rake in revenue.

I don’t need to say much more about this statement other than to look at and think about pirated content (copyright), porn (illegal content in some jurisdictions), and fake audiences (cyber fraud). Does this statement suggest that Google is a criminal enterprise? That’s a good question.

I have some high level observations about this excellent article in ProPublica. I offer these in the hope that ProPublica will explore some of these topics or an enterprising graduate student will consider the statements and do some digging.

  1. Why is Google unable to manage its staff? This is an important question because the ad behaviors described in the ProPublica article are the result of executive compensation plans and incentives. Are employees rewarded for implementing operations that further “soft” fraud or worse?
  2. How will Google operate in a more fragmented, more regulated environment? Is one possible behavior a refusal to modify the guiding hand of compensation and incentive programs away from generating more and more money within external constraints? My hunch is that Google will do whatever is necessary to build its revenue.
  3. What mechanisms exist or will be implemented to keep Google’s automated systems operating in a legal, ethical way?

Net net: Finally, after decades of craziness about how wonderful Googzilla is, more critical research is appearing. Is it too little and too late? In my view, yes.

Stephen E Arnold, December 26, 2022

Microsoft Software Quality: Word Might Stop Working. No Big Deal

December 20, 2022

I read a short item which underscores my doubts about Microsoft’s quality methods. I have questions about security issues in Microsoft’s enterprise and cloud products and services. But those are mostly “new” and the Big Hope for future revenues. Perhaps games will arrive to make the Softies buy Teslas and beef up their retirement accounts, just not yet.

“Microsoft Confirms Taskbar Bugs, Broken File Explorer, and App Issues in Windows 10” reports:

If you use Windows 10, you might experience the following symptoms:

  • The Weather or News and Interests widget or icons flickers on the Windows taskbar
  • The Windows taskbar stops responding
  • Windows Explorer stops responding
  • Applications including Microsoft Word or Excel might stop responding if they are open when the issue occurs

The weather and news are no big loss in my opinion. Microsoft believes that Windows 10 users want weather and news despite the mobile phone revolution. (Remember Microsoft and its play to create a mobile phone? Yeah, that was spun as fail early and fail fast. I think of that initiative as a basic fail, not a fast or early fail. Plain old fail.)

The Taskbar and file manager are slightly more interesting. A number of routine functions go south for some lucky Windows 10 users.

But the zinger fail is that Microsoft Word or Excel die. Now that’s just what’s needed to make the day of a person who is working on a report at a so-so consulting firm like one of the blue-chip outfits in Manhattan, a newbie at a big law firm with former government officials waiting for the worker bees to deliver a document for the bushy-eyebrow set to review, or a Wall Street type modifying a model to make his, her, or their partners lots of money.

These happy users are supposed to be able to handle stress and pressure.

I wonder if Microsoft executives have been in a consulting firm, law firm, or financial services company when a must have app stops responding. Probably not because these wizards are working on improving Microsoft’s quality control processes. Could Redmond’s approach to quality be blamed on an intern, a contractor, or a part time worker? My hunch is that getting blamed is not a component of the top dogs’ job description.

Stephen E Arnold, December 20, 2022

Elephants Recognize One Another and When They Stomp Around, Grass Gets Trampled

December 1, 2022

I find the coverage of the Twitter, Apple, and Facebook hoedown a good example of self-serving and possibly dysfunctional behavior.

What caught my attention in the midst of news about a Tim Apple and the Musker was this story “Zuckerberg Says Apple’s Policies Not Sustainable.” The write up reports as actual factual:

Meta CEO Mark Zuckerberg on Wednesday (November 30, 2022) added to the growing chorus of concerns about Apple, arguing that it’s “problematic that one company controls what happens on the device.” … Zuckerberg has been one of the loudest critics of Apple in Silicon Valley for the past two years. In the wake of Elon Musk’s attacks on Apple this week (third week of November 2022) , his concerns are being echoed more broadly by other industry leaders and Republican lawmakers….”I think the problem is that you get into it with the platform control, is that Apple obviously has their own interests…

Ah, Facebook, with its interesting financial performance partially a result of Apple’s unilateral actions, is probably not an objective observer. What about the Facebook Cambridge Analytica matter? Ancient history.

Much criticism is directed at the elected officials in the European Union for questioning the business methods of American companies. The interaction of Apple, Facebook, and Twitter will draw more attention to the management methods, the business procedures, and the motivation behind some words and deeds.

If I step back from the flood of tweets, Silicon Valley “real” news, and oracular (possibly self-congratulatory) write-ups from conference organizers, what do I see:

  1. Activities illustrating what happens in a Wild West business environment
  2. Personalities looming larger than the ethical issues intertwined with their revenue generation methods
  3. Regulatory authorities’ inaction creating genuine concern among users, business partners, and employees.

Elephants can stomp around. Even when the beasts mean well, their sheer size puts smaller entities at risk. The shenanigans of big creatures are interesting. Are these creatures of magnitude sustainable or a positive for the datasphere? My view? Nope.

Stephen E Arnold, December 1, 2022
