
Open Source Software Needs a Micro-Payment Program

May 27, 2016

Open source software is an excellent idea because it allows programmers across the globe to share and contribute to the same project.  It also creates a think-tank-like environment that can arguably be applied to any tech field.  The downside to open source and creative commons software is that it is not a sustainable model.  Open Source Everything For The 21st Century discusses the issue in the post “Robert Steele: Should Open Source Code Have A PayPal Address & AON Sliding Scale Rate Sheet?”

The post explains that open source delivers an unclear message about how code is generated: it comes from the greater whole rather than from a few individuals.  It is also not sustainable, because people need funds to survive as well as to maintain the software.  Fair Source offers a partial remedy, charging users when the software is used at a company with fifteen or more employees, but it too falls short of a sustainable model.

Micro-payments, payments of a few cents or less, might be the ultimate solution.  Robert Steele wrote:

“I see the need for bits of code to have embedded within them both a PayPal-like address able to handle micro-payments (fractions of a cent), and a CISCO-like Application Oriented Network (AON) rules and rate sheet that can be updated globally with financial-level latency (which is to say, instantly) and full transparency. Some standards should be set for payment scales, e.g. 10 employees, 100, 1000 and up; such that a package of code with X number of coders will automatically begin to generate PayPal payments to the individual coders when the package hits N use cases within Z organizational or network structures.”
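Steele's tiered rate-sheet mechanism can be sketched in a few lines. The tier boundaries, per-use rates, and even split among coders below are illustrative assumptions, not figures the post specifies:

```python
# Hypothetical sliding-scale rate sheet: organizations pay a per-use
# micro-payment based on employee-count tiers, and the total is split
# among the package's coders. All numbers here are invented for illustration.

RATE_SHEET = [  # (minimum employees, payment per use in cents)
    (1000, 0.10),
    (100, 0.05),
    (10, 0.01),
    (0, 0.0),  # very small organizations pay nothing
]

def rate_for(employees: int) -> float:
    """Return the per-use micro-payment (in cents) for an organization."""
    for threshold, cents in RATE_SHEET:
        if employees >= threshold:
            return cents
    return 0.0

def payout_per_coder(employees: int, uses: int, coders: int) -> float:
    """Split the accumulated micro-payments evenly among the coders."""
    total = rate_for(employees) * uses
    return total / coders

# A 150-employee firm making 1,000,000 calls, split among 4 coders:
share = payout_per_coder(150, 1_000_000, 4)  # 12,500 cents each, i.e. $125
```

Even at fractions of a cent per use, high-volume packages would generate meaningful payouts, which is exactly the sustainability argument the post is making.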

Micro-payments are not a bad idea, and they have occasionally been put into practice, though not widely.  No one has yet pioneered an effective system for them.

Steele also argues that “…Internet access and individual access to code is a human right, devising new rules for a sharing economy in which code is a cost of doing business at a fractional level in comparison to legacy proprietary code — between 1% and 10% of what is paid now.”

It is an ideal version of the Internet, where people are able to make money from their content and creations, users’ privacy is maintained, and ethics are respected.  The current trouble with YouTube channels and copyright comes to mind, as does stolen information sold on the Dark Web and the desire to eradicate online bullying.

Whitney Grace, May 27, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Erdogan Government Cracks down on Turkish Media

May 26, 2016

The Turkish government has been forcibly seizing and intimidating the nation’s media, we learn from “Erdogan’s Latest Media Takeover is About More than Just One Newspaper” at Mashable. Is this the future of publishing?

Turkish police fought protesters and manhandled journalists as the government wrested control of Zaman, Turkey’s most popular newspaper and, as journalist Suna Vidinli puts it, the country’s “last remaining effective voice of criticism in the press.” She continues:

“President Erdogan had long planned to take over Zaman as the paper was affiliated with Gulen Group, his main remaining adversary in his quest for absolute power. Earlier in the week, the Turkish Supreme Court — in a surprising and rare move — had released two top editors of Cumhuriyet, Can Dundar and Erdem Gul, from prison. They were imprisoned for writing about the illegal trafficking of weapons to radicals in Syria.

“Erdogan saw their release as a direct move against his authority and vowed to show who was boss. He signaled that the two journalists would be put back in prison soon and declared ‘things can get shaky in the following days.’ Hence, the takeover of Zaman was carefully planned as the most brutal confiscation of media to date in Turkish history.

“The confiscation of Zaman media group highlights some critical developments in Turkey. The government immediately took the media group offline, and a special tech team was brought in to completely wipe out the news archive and web content of the newspaper.”

The Cihan News Agency was also included in the seizure; we learn the group was the only non-governmental organization monitoring Turkish exit polls to ensure fair elections. The article notes that the remaining independent media in Turkey seem to have been effectively cowed, since none of them reported on the violent takeover. Governments, media groups, and human rights organizations around the world condemned the seizure; the U.S. State Department called Turkey’s pattern of media suppression “troubling.” We couldn’t agree more.

Cynthia Murrell, May 26, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


The Guardian Adheres to Principles

May 20, 2016

In the 1930s, a generous family endowment placed Britain’s newspaper the Guardian in a trust built on the ideals of an unfettered press and free access to information. In continued pursuit of those goals, the publication has maintained a paywall-free online presence despite declining online-advertising revenue. That choice has cost the publication, we learn from the piece “Guardian Bet Shows Digital Risks” at USA Today. Writer Michael Wolff explains:

“In order to underwrite the costs of this transformation, most of the trust’s income-producing investments have been liquidated in recent years in order to keep cash on hand — more than a billion dollars.

“In effect, the Guardian saw itself as departing the newspaper business and competing with new digital news providers like BuzzFeed and Vox and Vice Media, each raising ever-more capital from investors with which to finance their growth. The Guardian — unlike most other newspapers that are struggling to make it in the digital world without benefit of access to outside capital — could use the interest generated by its massive trust to indefinitely deficit-finance its growth. At a mere 4% return, that would mean it could lose more than $40 million a year and be no worse for wear.

“But … the cost of digital growth mounted as digital advertising revenue declined. And with zero interest rates, there has been, practically speaking, no return on cash. Hence, the Guardian’s never-run-out endowment has plunged by more than 12% since the summer and, suddenly looking at a finite life cycle, the Guardian will now have to implement another transition: shrinking rather than expanding.”
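Wolff's arithmetic can be checked in a couple of lines; the figures below are the article's round numbers, not the Guardian's actual accounts:

```python
# Sanity check of the article's round numbers: a ~$1 billion trust
# returning 4% covers about $40 million of annual losses, while the
# reported 12% plunge leaves the principal noticeably smaller.
endowment = 1_000_000_000                  # "more than a billion dollars"

sustainable_draw = endowment * 4 // 100    # 4% return: 40,000,000 per year
after_decline = endowment * 88 // 100      # principal after a 12% plunge: 880,000,000
```

At zero interest, any spending comes straight out of that shrinking principal, which is why the article describes the endowment as suddenly having a finite life cycle.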

The Guardian’s troubles point to a larger issue, writes Wolff; no one has been able to figure out a sustainable business model for digital news. For its part, the Guardian still plans to avoid a paywall, but will try to coax assorted fees from its users. We shall see how that works out.


Cynthia Murrell, May 20, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


Google Moonshot Targets Disease Management, but Might Face Obstacle with Google Management Methods

May 17, 2016

The article on STAT titled “Google’s Bold Bid to Transform Medicine Hits Turbulence Under a Divisive CEO” explores Google’s management methods for one of its “moonshot” projects. Namely, the massive company has directed considerable resources toward overhauling medicine. Verily Life Sciences is the three-year-old startup with a mysterious mission and a controversial leader in Andrew Conrad. So far, roughly a dozen Verily players have abandoned the project.

“But “if they are getting off the roller coaster before it gets to the first dip,” something looks seriously wrong, said Rob Enderle, a technology analyst who has tracked Google since its inception. Those who depart well-financed startups usually forsake potential financial windfalls down the line, which further suggests that the people leaving Verily “are losing confidence in the leadership,” he said. No similar brain drain has occurred at Calico, another ambitious Google spinoff, which is focused on increasing the human lifespan.”

Given the scope of the Verily project, which Google co-founder Sergey Brin said he hoped would significantly change the way we identify, avoid, and handle illness, perhaps Conrad is cracking under the stress. He has maintained complete radio silence, and rumors abound that his employees operate under threat of termination for speaking to reporters.

Chelsea Kerwin, May 17, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

An Open Source Search Engine to Experiment With

May 1, 2016

Apache Lucene receives the most headlines when it comes to discussions of open source search software.  My RSS feed pulled up another open source search engine that shows promise as a decent piece of software.  Open Semantic Search is free software that can be used for text mining, analytics, search, data exploration, and other research tasks.  It is built on Elasticsearch and Apache Solr’s open source enterprise search, designed around open standards and robust semantic search.

As with any open source search tool, it can be configured with numerous features based on the user’s preferences.  These include tagging, annotation, support for varying file formats and multiple data sources, data visualization, newsfeeds, automatic text recognition, faceted search, interactive filters, and more.  It can also be adapted for mobile platforms, metadata management, and file system monitoring.

Open Semantic Search is described as:

“Research tools for easier searching, analytics, data enrichment & text mining of heterogeneous and large document sets with free software on your own computer or server.”
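Since the project sits on Apache Solr, a faceted search against it would be issued as a standard Solr select request along these lines. The core name and field names below are illustrative assumptions, not taken from the project's documentation:

```python
# Sketch of building a faceted Solr query URL, the kind of request an
# Open Semantic Search deployment would send to its Solr backend.
from urllib.parse import urlencode

def build_facet_query(base_url: str, query: str, facet_fields: list[str]) -> str:
    """Build a Solr select URL requesting facet counts on the given fields."""
    params = [
        ("q", query),
        ("wt", "json"),       # ask for a JSON response
        ("facet", "true"),    # enable faceting
    ]
    params += [("facet.field", f) for f in facet_fields]
    return base_url + "/select?" + urlencode(params)

url = build_facet_query(
    "http://localhost:8983/solr/opensemanticsearch",  # assumed core name
    "text mining",
    ["author_s", "content_type"],  # assumed facet fields
)
```

Facet counts are what power the interactive filters mentioned above: the response groups matching documents by author, content type, and so on.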

While its base code is derived from Apache Lucene, it takes the original product and builds something better.  Proprietary software is an expense dubbed a necessary evil if you work at a large company.  If, however, you are a programmer with the time to develop your own search engine and analytics software, do it.  It could even turn out better than the proprietary stuff.

Whitney Grace, May 1, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Watson Joins the Hilton Family

April 30, 2016

It looks like Paris Hilton might have a new sibling, although the conversations at family gatherings will be lackluster.  No, the hotel-chain family has not adopted Watson; instead, a version of the artificial intelligence will work as a concierge.  Ars Technica informs us that “IBM Watson Now Powers a Hilton Hotel Robot Concierge.”

The Hilton McLean hotel in Virginia now has a new concierge dubbed Connie, after the chain’s founder Conrad Hilton.  Connie is housed in a Nao, an affordable French-made android built for customer relations.  Its brain is based on Watson’s programming, and it answers verbal queries using a WayBlazer database.  The little robot assists guests by explaining how to navigate the hotel and find restaurants and tourist attractions.  It cannot check in guests yet, but when the concierge station is busy, or you would rather not pull out your smartphone or deal with another human, it is a good substitute.

“‘This project with Hilton and WayBlazer represents an important shift in human-machine interaction, enabled by the embodiment of Watson’s cognitive computing,’ Rob High, chief technology officer of Watson, said in a statement. ‘Watson helps Connie understand and respond naturally to the needs and interests of Hilton’s guests—which is an experience that’s particularly powerful in a hospitality setting, where it can lead to deeper guest engagement.’”

Asia already uses robots in service industries such as hotels and restaurants.  It is worrying that Connie-like robots could replace people in these jobs; robots are supposed to augment human life, not take jobs away from it.  While Connie-like robots will have a major impact on the industry, there is something to be said for genuine human interaction, which is usually preferred over artificial intelligence.  Maybe teaming robots with humans in the service industries would provide the best all-around care?

Whitney Grace, April 30, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

A Dark Web Spider for Proactive Protection

April 29, 2016

There is a new tool for organizations to more quickly detect whether their sensitive data has been hacked.  The Atlantic discusses “The Spider that Crawls the Dark Web Looking for Stolen Data.” Until now, it often took many moons before an organization realized it had been hacked. Matchlight, from Terbium Labs, offers a more proactive approach. The service combs the corners of the Dark Web looking for the “fingerprints” of its clients’ information. Writer Kaveh Waddell reveals how it is done:

“Once Matchlight has an index of what’s being traded on the Internet, it needs to compare it against its clients’ data. But instead of keeping a database of sensitive and private client information to compare against, Terbium uses cryptographic hashes to find stolen data.

“Hashes are functions that create an effectively unique fingerprint based on a file or a message. They’re particularly useful here because they only work in one direction: You can’t figure out what the original input was just by looking at a fingerprint. So clients can use hashing to create fingerprints of their sensitive data, and send them on to Terbium; Terbium then uses the same hash function on the data its web crawler comes across. If anything matches, the red flag goes up. Rogers says the program can find matches in a matter of minutes after a dataset is posted.”
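The one-way fingerprinting scheme described above can be sketched with a standard hash function. SHA-256 and the sample records below are assumptions for illustration; the article does not specify Terbium's actual hashing details.

```python
# Simplified sketch of hash-based fingerprint matching: the client shares
# only hashes of its sensitive records, and the crawler hashes whatever it
# scrapes and checks for membership. The originals never leave the client.
import hashlib

def fingerprint(record: str) -> str:
    """One-way fingerprint: the original text cannot be recovered from it."""
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

# Client side: fingerprints of sensitive records, sent to the monitoring service.
client_prints = {fingerprint(r) for r in ["4111-1111-1111-1111", "jane@example.com"]}

# Crawler side: hash each scraped item and flag any that match a client record.
def check_scraped(items: list[str]) -> list[str]:
    """Return the scraped items whose fingerprints match a client fingerprint."""
    return [item for item in items if fingerprint(item) in client_prints]

hits = check_scraped(["random forum post", "4111-1111-1111-1111"])
```

Because the hash is one-way, the monitoring service can raise the red flag on a match without ever holding the client's plaintext data, which is the point of the design.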

What an organization does with this information is, of course, up to them; but whatever the response, now they can implement it much sooner than if they had not used Matchlight. Terbium CEO Danny Rogers reports that, each day, his company sends out several thousand alerts to their clients. Founded in 2013, Terbium Labs is based in Baltimore, Maryland. As of this writing, they are looking to hire a software engineer and an analyst, in case anyone here is interested.


Cynthia Murrell, April 29, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Developing Nations Eager to Practice Cyber Surveillance

April 28, 2016

Is it any surprise that emerging nations want in on the ability to spy on their citizens? That’s what all the cool governments are doing, after all. Indian Strategic Studies reports, “Even Developing Nations Want Cyber Spying Capabilities.” Writer Emilio Iasiello sets the stage—he contrasts efforts by developed nations to establish restrictions versus developing countries’ increased interest in cyber espionage tools.

On one hand, we could take heart from statements like this letter and this summary from the UN, and the “cyber sanctions” authority the U.S. Department of Treasury can now wield against foreign cyber attackers. At the same time, we may uneasily observe the growing popularity of FinFisher, a site which sells spyware to governments and law enforcement agencies. A data breach against FinFisher’s parent company, Gamma International, revealed the site’s customer list. Notable client governments include Bangladesh, Kenya, Macedonia, and Paraguay. Iasiello writes:

“While these states may not use these capabilities in order to conduct cyber espionage, some of the governments exposed in the data breach are those that Reporters without Borders have identified as ‘Enemies of the Internet’ for their penchant for censorship, information control, surveillance, and enforcing draconian legislation to curb free speech. National security is the reason many of these governments provide in ratcheting up authoritarian practices, particularly against online activities. Indeed, even France, which is typically associated with liberalism, has implemented strict laws fringing on human rights. In December 2013, the Military Programming Law empowered authorities to surveil phone and Internet communications without having to obtain legal permission. After the recent terrorist attacks in Paris, French law enforcement wants to add addendums to a proposed law that blocks the use of the TOR anonymity network, as well as forbids the provision of free Wi-Fi during states of emergency. To put it in context, China, one of the more aggressive state actors monitoring Internet activity, blocks TOR as well for its own security interests.”

The article compares governments’ cyber spying and other bad online behavior to Pandora’s box. Are resolutions against such practices too little too late?


Cynthia Murrell, April 28, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


Project Cumulus Tracks Stolen Credentials

April 26, 2016

Ever wonder how far stolen information can go on the Dark Web? If so, check out “Project Cumulus—Tracking Fake Phished Credentials Leaked to Dark Web” at Security Affairs. Researchers at Bitglass baited the hook and tracked the mock data.  Writer Pierluigi Paganini explains:

“The researchers created a fake identity for employees of a ghostly retail bank, along with a functional web portal for the financial institution, and a Google Drive account. The experts also associated the identities with real credit-card data, then leaked ‘phished’ Google Apps credentials to the Dark Web and tracked the activity on these accounts. The results were intriguing, the leaked data were accessed in 30 countries across six continents in just two weeks. Leaked data were viewed more than 1,000 times and downloaded 47 times, in just 24 hours the experts observed three Google Drive login attempts and five bank login attempts. Within 48 hours of the initial leak, files were downloaded, and the account was viewed hundreds of times over the course of a month, with many hackers successfully accessing the victim’s other online accounts.”

Yikes. A few other interesting Project Cumulus findings: More than 1400 hackers viewed the credentials; one tenth of those tried to log into the faux-bank’s web portal; and 68% of the hackers accessed Google Drive through the Tor network. See the article for more details. Paganini concludes with a reminder to avoid reusing login credentials, especially now that we see just how far stolen credentials can quickly travel.


Cynthia Murrell, April 26, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


New York Times: Editorial Quality in Action

April 22, 2016

On April 14, 2016, I flipped through my dead tree copy of the New York Times. You know. The newspaper which is struggling to sell more copies than McPaper. What first caught my eye was this advertisement for a dead tree book called “The New York Times Manual of Style and Usage: The Official Style Guide Used by the Writers and Editors of the World’s Most Authoritative News Organization.” I assume this manual was produced by “real” journalists and editors. I am not familiar with this book, although I was aware of its existence. The addled goose uses the style set forth in the classic Tressler and Christ, circa 1958. Oh, you may be able to read a version of the New York Times story at this link. Keep in mind that you may have to pay pay pay.


I noted in the very same dead tree edition of the New York Times this write up about a football (soccer) match. I know that the “real” journalists working in Midtown are probably not into the European Cup if there is a Starbucks nearby.

I noted this interesting stylistic touch:


I spotted two paragraphs which are mostly the same. I assume that the new edition of the Style and Usage volume is okay with duplicate passages. It is tough to determine which is the “correct” paragraph.

Tressler and Christ, as I recall, suggested that writing the same passage twice in a row was not a good move in 1958. The reality of the cost-conscious New York Times may be that it is okay to pontificate and then duplicate content.

Nifty. I will try this some time.

Nifty. I will try this some time.

Nifty. I will try this some time.

Nifty. I will try this some time.

See. Not annoying annoying annoying at all.

Stephen E Arnold, April 22, 2016
