Technical Debt and Technical Wealth

August 29, 2016

I read “Forget Technical Debt. Here’s How to Build Technical Wealth.” Lemons? Make lemonade. Works almost every time.

The write up begins with a reminder that recent code which is tough to improve is a version of legacy code. I understand. I highlighted this statement:

Legacy code isn’t a technical problem. It’s a communication problem.

I am not sure I understand. But let’s move forward in the write up. I noted this statement:

“It’s the law that says your codebase will mirror the communication structures across your organization. If you want to fix your legacy code, you can’t do it without also addressing operations, too. That’s the missing link that so many people miss.”—Andrea Goulet, CEO of Corgibytes

So what’s the fix for legacy code at an outfit like Delta Airlines, the US air traffic control system, the US Internal Revenue Service, or a Web site crafted in 1995?

I highlighted this advice:

Forget debt, build technical wealth.

Very MBA-ish. I trust MBAs. Heck, I have affection for some, well, one or two. The mental orientation struck me as quite Wordsworthian:

Stop thinking about your software as a project. Start thinking about it as a house you will live in for a long time…

Just like with a house, modernization and upkeep happens in two ways: small, superficial changes (“I bought a new rug!”) and big, costly investments that will pay off over time (“I guess we’ll replace the plumbing…”). You have to think about both to keep your product current and your team running smoothly. This also requires budgeting ahead — if you don’t, those bigger purchases are going to hurt. Regular upkeep is the expected cost of home ownership. Shockingly, many companies don’t anticipate maintenance as the cost of doing business.

Okay, let’s think about legacy code in something like a “typical” airline or a “typical” agency of the US Executive Branch. Efforts have been made over the last 20 years to improve the systems. Yet these outfits, like many commercial enterprises, are a digital Joseph’s coat of many systems, software, hardware, and methods. The idea is to keep the IRS up and running; that is, good enough to remain dry when it rains and pours.

There is, in my opinion, not enough money to “fix” the IRS systems. Even if there were money, the problem of code written by many hands over many years would remain intractable. The idea of “menders” is a good one. But where does one find enough menders to remediate the systems at a big outfit?

Google’s approach is to minimize “legacy” code in some situations. See “Google Is in a Vicious Build Retire Cycle.”

The MBA charts, graphs, and checklists do not deliver wealth. The approach sidesteps a very important fact. There are legacy systems which, if they crash, are increasingly difficult to get back up and running. The thought of remediating the systems coded by folks long since retired or deceased is something that few people, including me, have a desire to contemplate. Legacy code is a problem, and there is no quick, easy, business school fix I know about.

Maybe somewhere? Maybe someplace? Just not in Harrod’s Creek.

Stephen E Arnold, August 29, 2016

Google Offers Free Cloud Access to Colleges

August 29, 2016

Think Amazon is the only outfit which understands the concept of strategic pricing, bundling, and free services? Google has decided to emulate such notable marketing outfits as Reed Elsevier’s LexisNexis by offering colleges a real deal on the use of for-fee online services. Who would have thought that Google would emulate LexisNexis’ law school strategy?

I read “Google Offers Free Cloud Access to Colleges, Plays Catch Up to Amazon, Microsoft.” I reported that a mid tier consulting firm anointed Microsoft as the Big Dog in cloud computing. Even in Harrod’s Creek, folks know that Amazon is at least in the cloud computing kennel with the Softies.

According to the write up:

Google in June announced an education grant offering free credits for its cloud platform, with no credit card required, unlimited access to its suite of tools and training resources. Amazon and Microsoft’s cloud services both offer education programs, and now Google Cloud wants a part in shaping future computer scientists — and probably whatever they come up with using the tool.

The write up points out:

Amazon and Microsoft’s cloud services offer an education partnership in free trials or discounted pricing. For the time being, Microsoft Azure’s education program is not taking new applications and “oversubscribed,” the website reads. Amazon Web Services has an online application for its education program for teachers and students to get accounts, and Google is accepting applications from faculty members.

How does one avail oneself of these free services? Sign up for a class and hope that your course “Big Band Music from the 1940s” qualifies you for free cloud stuff.

Stephen E Arnold, August 29, 2016

Facebook Ad Targeting Revealed

August 29, 2016

A scoop maybe. Navigate to “98 Personal Data Points That Facebook Uses to Target Ads to You.” The list-tickle becomes news because real newspapers report real news. For the full list, visit the estimable Washington Bezos. Sorry, Washington Post.

Here are some signals I found amusing:

  • How much money user is likely to spend on next car. Doesn’t that depend on fashion, the deal, or what my spouse wants to drive?
  • Users who have created a Facebook event. I don’t know what a Facebook “event” is.
  • Users who investor (divided by investment type). For a real journalism outfit, I am puzzled by the phrase “who investor”.
  • Types of clothing user’s household buys. Another grammatical gem.
  • Users who are “heavy” buyers of beer, wine or spirits. I assume “heavy” means obese. Perhaps I am incorrect.
  • Users who are interested in the Olympics, fall football, cricket or Ramadan. What about other sports like Ramadan?

All in all, a fine list. An ever more better finest scrumptious article from a real journalistic outfit, the Washington Bezos. Darn, there I go again. I mean the Washington Post.

Stephen E Arnold, August 29, 2016

Faster Text Classification from Facebook, the Social Outfit

August 29, 2016

I read “Faster, Better Text Classification.” Facebook’s artificial intelligence team has made available some of its whizzy code. The software may be a bit of a challenge to the vendors of proprietary text classification software, but Facebook wants to help everyone. Think of the billion plus Facebook users who need to train an artificially intelligent system with one billion words in 10 minutes. You may want to try this on your Chromebook, gentle reader.

I learned:

Automatic text processing forms a key part of the day-to-day interaction with your computer; it’s a critical component of everything from web search and content ranking to spam filtering, and when it works well, it’s completely invisible to you. With the growing amount of online data, there is a need for more flexible tools to better understand the content of very large datasets, in order to provide more accurate classification results. To address this need, the Facebook AI Research (FAIR) lab is open-sourcing fastText, a library designed to help build scalable solutions for text representation and classification.

What does the Facebook text classification code deliver as open sourciness? I learned:

FastText combines some of the most successful concepts introduced by the natural language processing and machine learning communities in the last few decades. These include representing sentences with bag of words and bag of n-grams, as well as using subword information, and sharing information across classes through a hidden representation. We also employ a hierarchical softmax that takes advantage of the unbalanced distribution of the classes to speed up computation. These different concepts are being used for two different tasks: efficient text classification and learning word vector representations.
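To make that list of concepts concrete, here is a minimal sketch of what supervised classification with the open sourced fastText library looks like, using its later Python bindings rather than the original command line tool. The file name, labels, and parameter values below are my own illustrative assumptions, not anything taken from the write up or Facebook’s post.

  # Minimal sketch: supervised text classification with fastText.
  # Assumes `pip install fasttext` and a training file in fastText's
  # __label__ format, e.g. "__label__spam Win a free cruise today".
  import fasttext

  model = fasttext.train_supervised(
      input="train.txt",   # hypothetical training file
      epoch=5,             # a few passes over the data
      lr=0.5,              # learning rate
      wordNgrams=2,        # bag of words plus bag of bigrams
      minn=3, maxn=6,      # character n-grams, i.e. subword information
      loss="hs",           # hierarchical softmax to speed up many-class training
  )

  labels, probabilities = model.predict("this message looks like spam to me")
  print(labels, probabilities)

  model.save_model("classifier.bin")

The wordNgrams, minn/maxn, and loss parameters map onto the bag of n-grams, subword information, and hierarchical softmax ideas named in the quotation above.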

The write up details some of the benefits of the code; for example, its multilingual capabilities and its accuracy.

What will other do-gooders like Amazon, Google, and Microsoft do to respond to Facebook’s generosity? My thought is that more text processing software will find its way to open source green pastures.

What will the for-fee vendors peddling proprietary classification systems do? Here’s a short list of ideas I had:

  1. Pivot to become predictive analytics companies and seek new rounds of financing
  2. Pretend that open source options are available but not good enough for real world tasks
  3. Generate white papers and commission mid tier consulting firms to extol the virtues of their innovative, unique, high speed, smart software
  4. Look for another line of work in search engine optimization, direct sales for a tool and die company, or check out Facebook.

Stephen E Arnold, August 29, 2016

Defining AI, Machine Learning, and Deep Learning

August 28, 2016

Confused about the jargon marketing professionals hose at you? No need. Navigate to “AI vs Deep Learning vs Machine Learning.” The truth is revealed. Here’s my take on the definitions:

  • Artificial intelligence is an umbrella term. One can use it for almost any sales pitch.
  • Deep learning is pattern recognition with human inputs.
  • Machine learning is pretty much like deep learning.

There are some other concepts that may be found in search and content processing vendors’ slideshows, sales pitches, and marketing collateral; for example:

  • Cognitive computing
  • Semantics
  • Natural language processing.

What do these terms mean? I have no idea. I understand counting entities and using methods to perform query expansion. On a good day, I can name a couple of ways to perform clustering.
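Since clustering came up, here is a tiny sketch of one common way to do it: k-means over TF-IDF vectors with scikit-learn. The toy documents and the choice of two clusters are mine, purely for illustration; this is not tied to any product mentioned in the write up.

  # Toy example: cluster a handful of short documents with k-means on TF-IDF vectors.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.cluster import KMeans

  docs = [
      "cloud computing pricing for colleges",
      "free cloud credits for students",
      "text classification with word vectors",
      "training a text classifier on n-grams",
  ]

  vectors = TfidfVectorizer().fit_transform(docs)            # documents -> sparse TF-IDF matrix
  kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)   # two clusters, fixed seed
  labels = kmeans.fit_predict(vectors)

  for doc, label in zip(docs, labels):
      print(label, doc)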

This buzzword blizzard just confuses me. Most Star Trek systems require rules and human crafted training. Then every once in a while one has to retrain the smart software. Progress in marketing is outpacing progress in some of the technology described by marketers.

Stephen E Arnold, August 28, 2016

Alphabet Google and the Gmail Ad Matter

August 27, 2016

Did you know that the Alphabet Google thing manages or provides email for about one billion users? No, that’s not a record; search has that many “prospects” for advertisers.

I noted this story: “Google Faces Legal Action over Data Mining Emails.” In theory, humans at the Alphabet Google thing do not read one’s emails. I know that when I sent an email to a Googler, that person did not read the email. So there, doubting Tabithas and Tommies.

I learned from the write up, which I am confident is as valid as any other Internet news item:

… the US District Court for the Northern District of California issued an order denying Google’s motion to dismiss a lawsuit brought by plaintiff Daniel Matera which alleged that Google violated federal and state wiretapping laws in its operation of Gmail. The Wiretap Act prohibits the interception of wire, oral and electronic communications.

I circled this passage as well:

In this latest twist, Judge Koh found Google’s policy of intercepting and scanning emails before they reach the inbox of the intended recipient may violate the California Wiretap Act and denied Google’s motion to dismiss Matera’s lawsuit. Matera is not a Google customer but claims that the “ubiquity of the email service” means that Google has still intercepted, scanned and analyzed his and many others’ emails. [Matera] seeks to represent non-Gmail users “who have never established an email account with Google, and who have sent emails to or received emails from individuals with Google email accounts.”

The Alphabet Google thing is certainly in the midst of a number of legal hassles. We love Google and its relevant search results. I have concluded that there are some folks who cannot hop on the Alphabet Google bandwagon. Cue up a John Philip Sousa remix, “The GOOG and Alphabet Forever.”

Stephen E Arnold, August 27, 2016

Libraries Continue to Stay Awesome by Renting Out the Internet

August 26, 2016

The article on The Seattle Public Library site titled “SPL HotSpot” describes a great option for library patrons: “checking out” a mobile hotspot for up to 21 days for free with a valid library card. This is an excellent service for those of us without reliable Internet (thanks, Time Warner Cable) or who are traveling within the United States. More than anything, though, this service provides Internet access to low-income patrons. The article explains,

“The SPL HotSpot is an easy-to-use, mobile hotspot that keeps your tablet, laptop and other Wi-Fi–enabled devices connected to the Internet.

You can connect up to 15 devices to 4G LTE and 3G networks, and also charge external devices… You can return the hotspot to any Library location or book drop, just like other items. You must return the device with all the original packaging and accessories. Please fully charge the battery before you return the device.”

There are a few drawbacks: there is a $199 fine if the device is not returned on time, and according to user responses, the wait time is currently up to two months. But thanks to the Internet monopolies held by massive corporations, the cost of access is increasing, while at the same time so is our collective dependence on the Internet. Can you imagine going even a day without having it available? This is an invaluable service that will hopefully catch on elsewhere!

Chelsea Kerwin, August 26, 2016

Machine Learning Search Algorithms Reflect Female Stereotypes

August 26, 2016

The article on MediaPost titled “Are Machine Learning Search Algorithms To Blame for Stereotypes?” poses a somewhat misleading question about the role of search engines such as Google and Bing in prejudice and bias. Ultimately the algorithms are not the root of the problem, but rather a reflection of their creators. Looking at the images returned when searching for “beautiful” and “ugly” women, researchers found the following.

“In the United States, searches for “beautiful” women return pictures that are 80% white, mostly between the ages of 19 and 28. Searches for “ugly” women return images of those about 60% white and 20% black between the ages of 30 to 50. Researchers admit they are not sure of the reason for the bias, but conclude that they may stem from a combination of available stock photos and characteristics of the indexing and ranking algorithms of the search engines.”

While it might be appealing to think that machine learning search algorithms have somehow magically fallen in line with the stereotypes of the human race, obviously they are simply regurgitating the bias in the data. Or alternatively, perhaps they learn prejudice from the humans selecting and tuning the algorithms. At any rate, it is an unfortunate record of the harmful attitudes and racial bias of our time.

Chelsea Kerwin, August 26, 2016

Smart Software Pitfalls: A List-Tickle

August 26, 2016

Need page views? Why not try a listicle or, as we say here in Harrod’s Creek, a “list-tickle.”

In order to understand the depth of thought behind “13 Ways Machine Learning Can Steer You Wrong,” one must click 13 times. I wonder if the managers responsible for this PowerPoint approach to analysis handed in their college work on 5×8-inch note cards and required that the teacher ask for each individually.

What are the ways machine learning can steer one into a ditch? As Ms. Browning said in a single poem on one sheet of paper, “Let me count the ways.”

  1. The predictions output by the Fancy Dan system are incorrect. Fancy that.
  2. One does not know what one does not know. This reminds me of a Donald Henry Rumsfeld koan. I love it when real journalists channel the Rumsfeld line of thinking.
  3. Algorithms are not in line with reality. Mathematicians and programmers are reality. What could these folks do that does not match the Pabst Blue Ribbon beer crowd at a football game? Answer: Generate useless data unrelated to the game and inebriated fans.
  4. Biased algorithms. As I pointed out in this week’s HonkinNews, numbers are neutral. Humans, eh, not often.
  5. Bad hires. There you go. Those LinkedIn expertise graphs can be misleading.
  6. Cost lots of money. Most information technology projects cost a lot of money even when they are sort of right. When they are sort of wrong, one gets Hewlett Packard-Autonomy-style deals.
  7. False assumptions. My hunch is that this is Number Two wearing lipstick.
  8. Recommendations unrelated to the business problem at hand. This is essentially Number One with a new pair of thrift store sneakers.
  9. Click an icon, get an answer. The Greek oracles required supplicants to sleep off a heady mixture of wine and herbs in a stone room. Now one clicks an icon when one is infused with a Starbucks tall, no-fat latte with caramel syrup.
  10. GIGO or garbage in, garbage out. Yep, that’s what happens when one cuts the statistics class when the professor talks about data validity.
  11. Looking for answers the data cannot deliver. See Number Five.
  12. Wonky outcomes. Hey, the real journalist is now dressing a Chihuahua in discarded ice skating attire.
  13. “Blind Faith.” Isn’t this a rock and roll band? When someone has been using computing devices since the person was four years old, that person is an expert and darned sure the computer speaks the truth like those Greek oracles.

Was I more informed after clicking 13 times? Nope.

Stephen E Arnold, August 26, 2016

Russia Boasts of Encryption Keys for Popular Social Messaging Apps

August 25, 2016

If Russia’s Federal Security Service is to be believed, they have devised a way to break through the encryption on some of the world’s biggest messaging apps. The International Business Times reports, “Russia Now Collecting Encryption Keys to Decode Information from Facebook, WhatsApp, and Telegram.” The initiative appears to be a response to pressure from the top; columnist Mary Ann Russon writes:

“In June, Russia passed a scary new surveillance law that demanded its security agencies find a way to conduct better mass surveillance, requiring all internet firms who provide services to citizens and residents in Russia to provide mandatory backdoor access to encrypted communications so the Russian government can know what people are talking about. If any of these internet companies choose not to comply, the FSB has the power to impose fines of up to 1 million rubles (£11,406)….

The article continued:

“The FSB has now updated its website declaring that it has indeed been able to procure a method to collect these encryption keys, although, cryptically, the agency isn’t saying how exactly it will be doing so. The notice on the FSB website simply declares that in order to ensure public safety and protect against terrorism, the FSB has found a ‘procedure of providing the FSB with a method necessary for decoding all received, sent, delivered, and chat conversations between users on messaging networks’ and that this method had been sent to the Ministry of Justice to approve and make provisions to amend federal law.”

At least the Russians are not coy about their efforts to spy on citizens. But is this a bluff? Without the details, it is hard to say. We do know the government is holding out a carrot to foreign messaging companies: they can continue to operate within Russia if they have their services “certified” by a government-approved lab. Hmm. How much is the Russian messaging market worth to these companies? I suppose we shall see.

Cynthia Murrell, August 25, 2016
