Category: Technology

3D Printing and Quality Ears. Ears? Expensive Monitors Really

It turns out that musicians wear customized earpieces called monitors to hear the music they make at a concert and to protect their ears from the speakers. A company called Ultimate Ears Pro is in this line of business and uses 3D printing for the next step in creating the devices. As Digital Trends explains, the shift is not about lowering cost but about increasing quality:

“Bringing this process in required a tremendous investment in capital, time, resources and training,” Dias explains, which is why 3D printing hasn’t lowered the price points for the devices, as we had imagined. In fact, the company apparently had to take a hit just to keep the pricing the same. Apart from throwing down a hefty load for equipment and software, all of the craftsmen who had been working with UE Pro’s in-ear monitors in the traditional method had to completely relearn their craft to work with the new 3D printing technology. As difficult as the process was, the company believes it was necessary to create a revolution in “speed, fit, quality, and comfort” for UE Pro’s monitors.

The company has mainly served professional musicians, but is now reaching music lovers too. UE Pro started with work for Van Halen’s drummer and then for the band’s opening act at the time, Skid Row. The desire to keep quality up is where 3D printing comes in. The turnaround time is about half what it was, but given the customer base of professionals and upscale music lovers, the quality improvement is what matters. As Ryan Waniata put it in his article, designers “can be more brazen with their sculpting, allowing them to create a fit for each user that is virtually perfect. And when it comes to in-ears, it’s all about the fit.”

The process still requires several other steps, including taking a mold of your ear. But the head of UE Pro mentioned something Gerard and I discussed in Patents, Meet Napster: 3D Printing and the Digitization of Things. Scanners may soon allow someone to get a scan at a store or make the scan themselves.

It’s not magic, but each step may move us toward a world of bespoke earpieces for almost everyone. An upgrade for an iPhone or Samsung phone may be supercool headphones, customized and as good as a rock star’s, which, after all, is what Apple claims we can all be, at least in our heads.

There She Is, Your Homemade AR-15

I cannot give a talk about 3D printing without addressing the question of homemade guns. As Gerard and I pointed out in Patents, Meet Napster: 3D Printing and the Digitization of Things, this is America, and making guns at home is legal. The issue many faced was whether the gun would work well, fail, or possibly misfire and harm the user. These questions are important as we look at the shifts in manufacturing. Many of us may prefer authorized, branded files and materials for homemade goods, or prefer to order from a third party that certifies the goods. That said, some gun folks and hobbyists are different. They want to make things at home, because they can. And now Defense Distributed has made the “Ghost Gunner,” “a small CNC milling machine that costs a mere $1200 and is capable of spitting out an aluminum lower receiver for an AR-15 rifle.” That lower is the part that the federal government regulates.

According to ExtremeTech, Defense Distributed’s founder Cody Wilson thinks that “Allowing everyone to create an assault rifle with a few clicks is his way of showing that technology can always evade regulation and render the state obsolete. If a few people are shot by ghost guns, that’s just the price we have to pay for freedom, according to Wilson.” This position is what most folks want to debate. But Gerard and I think something else is revealed here. As ExtremeTech puts it, “This is an entirely new era in the manufacturing of real world objects, in both plastic and metal. It used to be that you needed training as a gunsmith to make your own firearm, but that’s no longer the case.” That point is what motivated me to write about 3D printing and look deeper at digitization and disruption.

The first, short, follow-up on these ideas is in an essay called The New Steam: On Digitization, Decentralization, and Disruption that appeared in Hastings Law Journal this past summer.

Announcing the We Robot 2015 Call for Papers

Here is the We Robot call for papers, via Ryan Calo:

We Robot invites submissions for the fourth annual robotics law and policy conference—We Robot 2015—to be held in Seattle, Washington on April 10-11, 2015 at the University of Washington School of Law. We Robot has been hosted twice at the University of Miami School of Law and once at Stanford Law School. The conference web site is at http://werobot2015.org.

We Robot 2015 seeks contributions by academics, practitioners, and others in the form of scholarly papers or demonstrations of technology or other projects. We Robot fosters conversations between the people designing, building, and deploying robots, and the people who design or influence the legal and social structures in which robots will operate. We particularly encourage contributions resulting from interdisciplinary collaborations, such as those between legal, ethical, or policy scholars and roboticists.

This conference will build on existing scholarship that explores how the increasing sophistication and autonomous decision-making capabilities of robots and their widespread deployment everywhere from the home, to hospitals, to public spaces, to the battlefield disrupts existing legal regimes or requires rethinking of various policy issues. We are particularly interested this year in “solutions,” i.e., projects with a normative or practical thesis aimed at helping to resolve issues around contemporary and anticipated robotic applications.
Read More

Interview on The Black Box Society

Balkinization just published an interview on my forthcoming book, The Black Box Society. Law profs may be interested in our dialogue on methodology—particularly, what the unique role of the legal scholar is in the midst of increasing academic specialization. I’ve tried to surface several strands of inspiration for the book.

How We’ll Know the Wikimedia Foundation is Serious About a Right to Remember

The “right to be forgotten” ruling in Europe has provoked a firestorm of protest from internet behemoths and some civil libertarians.* Few seem very familiar with classic privacy laws that govern automated data systems. Characteristic rhetoric comes from the Wikimedia Foundation:

The foundation which operates Wikipedia has issued new criticism of the “right to be forgotten” ruling, calling it “unforgivable censorship.” Speaking at the announcement of the Wikimedia Foundation’s first-ever transparency report in London, Wikipedia founder Jimmy Wales said the public had the “right to remember”.

I’m skeptical of this line of reasoning. But let’s take it at face value for now. How far should the right to remember extend? Consider the importance of automated ranking and rating systems in daily life: in contexts ranging from credit scores to terrorism risk assessments to Google search rankings. Do we have a “right to remember” all of these—to, say, fully review the record of automated processing years (or even decades) after it happens?

If the Wikimedia Foundation is serious about advocating a right to remember, it will apply the right to the key internet companies organizing online life for us. I’m not saying “open up all the algorithms now”—I respect the commercial rationale for trade secrecy. But years or decades after the key decisions are made, the value of the algorithms fades. Data involved could be anonymized. And just as Assange’s and Snowden’s revelations have been filtered through trusted intermediaries to protect vital interests, so too could an archive of Google or Facebook or Amazon ranking and rating decisions be limited to qualified researchers or journalists. Surely public knowledge about how exactly Google ranked and annotated Holocaust denial sites is at least as important as the right of a search engine to, say, distribute hacked medical records or credit card numbers.

So here’s my invitation to Lila Tretikov, Jimmy Wales, and Geoff Brigham: join me in calling for Google to commit to releasing a record of its decisions and data processing to an archive run by a third party, so future historians can understand how one of the most important companies in the world made decisions about how it ordered information. This is simply a bid to assure the preservation of (and access to) critical parts of our cultural, political, and economic history. Indeed, one of the first items I’d like to explore is exactly how Wikipedia itself was ranked so highly by Google at critical points in its history. Historians of Wikipedia deserve to know details about that part of its story. Don’t they have a right to remember?
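To make the proposal concrete, here is a purely illustrative sketch of what such a third-party archive might look like in code: ranking decisions are appended to a tamper-evident log, anonymized, and released only after an embargo period. Every name, field, and the embargo length here are my own assumptions for exposition, not any real Google system or API.

```python
import hashlib
import json
import time

# Hypothetical embargo before researchers may see an entry (ten years).
EMBARGO_SECONDS = 10 * 365 * 24 * 3600

class DecisionArchive:
    """Append-only log of ranking decisions, released after an embargo."""

    def __init__(self):
        self._entries = []

    def record(self, query, ranked_results, timestamp=None):
        """Append one ranking decision; entries are never modified."""
        entry = {
            "query": query,
            "results": ranked_results,
            "timestamp": timestamp if timestamp is not None else time.time(),
        }
        # Hash-chain each entry to the previous one, so later tampering
        # is detectable by anyone replaying the log from the start.
        prev = self._entries[-1]["digest"] if self._entries else ""
        payload = prev + json.dumps(entry, sort_keys=True)
        entry["digest"] = hashlib.sha256(payload.encode()).hexdigest()
        self._entries.append(entry)

    def released(self, now):
        """Entries old enough to leave embargo, with queries anonymized."""
        out = []
        for e in self._entries:
            if now - e["timestamp"] >= EMBARGO_SECONDS:
                anon = hashlib.sha256(e["query"].encode()).hexdigest()[:12]
                out.append({**e, "query": anon})
        return out
```

The hash chain is the design point: a historian (or the trusted intermediary) can verify that nothing was quietly rewritten after the fact, which is what makes a delayed-release archive credible.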

*For more background, please note: we’ve recently hosted several excellent posts on the European Court of Justice’s interpretation of relevant directives. Though often called a “right to be forgotten,” the ruling in the Google Spain case might better be characterized as the application of due process, privacy, and anti-discrimination norms to automated data processing.

Facebook’s Model Users

Facebook’s recent psychology experiment has raised difficult questions about the ethical standards of data-driven companies, and the universities that collaborate with them. We are still learning exactly who did what before publication. Some are wisely calling for a “People’s Terms of Service” agreement to curb further abuses. Others are more focused on the responsibility to protect research subjects. As Jack Balkin has suggested, we need these massive internet platforms to act as fiduciaries.

The experiment fiasco is just the latest in a long history of ethically troubling decisions at that firm, and several others like it. And the time is long past for serious, international action to impose some basic ethical limits on the business practices these behemoths pursue.

Unfortunately, many in Silicon Valley still barely get what the fuss is about. For them, A/B testing is simply a way of life. Using it to make people feel better or worse is a far cry from, say, manipulating video poker machines to squeeze a few extra dollars out of desperate consumers. “Casino owners do that all the time!”, one can almost hear them rejoin.
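For readers outside that world, the A/B-testing routine in question is mechanically simple. Here is a minimal, hypothetical sketch—the function names and hash-based bucketing are my own illustrative assumptions, not Facebook’s actual pipeline: users are deterministically split into a control group and a variant group, and an engagement metric is compared between the two.

```python
import hashlib
import statistics

def assign_variant(user_id, experiment="feed_ranking"):
    """Deterministically bucket a user into 'A' (control) or 'B' (variant)
    by hashing the experiment name together with the user id."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def compare_engagement(minutes_by_user):
    """Given {user_id: minutes_on_site}, return mean minutes per bucket."""
    buckets = {"A": [], "B": []}
    for user_id, minutes in minutes_by_user.items():
        buckets[assign_variant(user_id)].append(minutes)
    return {k: statistics.mean(v) if v else 0.0 for k, v in buckets.items()}
```

Hashing (rather than random assignment at each visit) keeps every user in the same bucket for the life of the experiment, which is what lets a platform attribute any difference in the metric to the variant being tested.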

Yet there are some revealing similarities between casinos and major internet platforms. Consider this analogy from Rob Horning:

Social media platforms are engineered to be sticky — that is, addictive, as Alexis Madrigal details in [a] post about the “machine zone.” . . . Like video slots, which incite extended periods of “time-on-machine” to assure “continuous gaming productivity” (i.e. money extraction from players), social-media sites are designed to maximize time-on-site, to make their users more valuable to advertisers (Instagram, incidentally, is adding advertising) and to ratchet up user productivity in the form of data sharing and processing that social-media sites reserve the rights to.

That’s one reason we get headlines like “Teens Can’t Stop Using Facebook Even Though They Hate It.” There are sociobiological routes to conditioning action. The platforms are constantly shaping us, based on sophisticated psychological profiles.

For Facebook to continue to meet Wall Street’s demands for growth, its user base must grow and/or individual users must become more “productive.” Predictive analytics demands standardization: forecastable estimates of revenue-per-user. The more a person clicks on ads and buys products, the better. Secondarily, the more a person draws other potential ad-clickers in (via clicked-on content, catalyzing discussions, crying for help, whatever), the more valuable they become to the platform. The “model users” gain visibility, subtly instructing by example how to act on the network. They’ll probably never attain the notoriety of a Lei Feng, but the Republic of Facebookistan gladly pays them the currency of attention, as long as the investment pays off for top managers and shareholders.

As more people understand the implications of enjoying Facebook “for free”—i.e., that they are the product of the service—they also see that its real paying customers are advertisers. As Katherine Hayles has stated, the critical question here is: “will ubiquitous computing be coopted as a stalking horse for predatory capitalism, or can we seize the opportunity” to deploy more emancipatory uses of it? I have expressed faith in the latter possibility, but Facebook continually validates Julie Cohen’s critique of a surveillance-innovation complex.

A More Nuanced View of Legal Automation

A Guardian writer has updated Farhad Manjoo’s classic report, “Will a Robot Steal Your Job?” Of course, lawyers are in the crosshairs. As Julius Stone noted in The Legal System and Lawyers’ Reasoning, scholars have addressed the automation of legal processes since at least the 1960s. Al Gore now says that a “new algorithm . . . makes it possible for one first year lawyer to do the same amount of legal research that used to require 500.”* But when one actually reads the studies trumpeted by the prophets of disruption, a more nuanced perspective emerges.

Let’s start with the experts cited first in the article:

Oxford academics Carl Benedikt Frey and Michael A Osborne have predicted computerisation could make nearly half of jobs redundant within 10 to 20 years. Office work and service roles, they wrote, were particularly at risk. But almost nothing is impervious to automation.

The idea of “computing” a legal obligation may seem strange at the outset, but we already enjoy—or endure—it daily. For example, a DVD may only be licensed for play in the US and Europe, and then be “coded” so it can only play in those regions and not others. Were a human playing the DVD for you, he might demand a copy of the DVD’s terms of use and receipt, to see if it was authorized for playing in a given area. Computers need such a term translated into a language they can “understand.” More precisely, the legal terms embedded in the DVD must lead to predictable reactions from the hardware that encounters them. From Lessig to Virilio, the lesson is clear: “architectural regimes become computational, and vice versa.”
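Translated into code, the region-coding term described above is just a membership check—a simplified sketch (real players enforce this in firmware under the DVD-Video specification’s region scheme; the region numbers below follow the common 1–8 numbering):

```python
# Simplified sketch of DVD region enforcement: the disc carries the set
# of regions it is licensed for, and the player refuses playback when
# its own region is not in that set.
REGION_US = 1      # Region 1: US and Canada
REGION_EUROPE = 2  # Region 2: Europe (among others)

def can_play(disc_regions, player_region):
    """The 'legal term' on the disc, translated into a machine check."""
    return player_region in disc_regions

disc = {REGION_US, REGION_EUROPE}   # licensed for the US and Europe
print(can_play(disc, REGION_US))    # a US player: True
print(can_play(disc, 3))            # a region-3 player: False
```

The point is not the three lines of logic but what they replace: the human clerk checking the receipt and terms of use becomes a conditional that the hardware evaluates automatically, with no room for discretion.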

So certainly, to the extent lawyers are presently doing rather simple tasks, computation can replace them. But Frey & Osborne also identify barriers to successful automation:

1. Perception and manipulation tasks. Robots are still unable to match the depth and breadth of human perception.
2. Creative intelligence tasks. The psychological processes underlying human creativity are difficult to specify.
3. Social intelligence tasks. Human social intelligence is important in a wide range of work tasks, such as those involving negotiation, persuasion and care. (26)

Frey & Osborne only explicitly discuss legal research and document review (for example, identification and isolation among mass document collections) as easily automatable. They concede that “the computerisation of legal research will complement the work of lawyers” (17). They acknowledge that “for the work of lawyers to be fully automated, engineering bottlenecks to creative and social intelligence will need to be overcome.” In the end, they actually categorize “legal” careers as having a “low risk” of “computerization” (37).

The View from AI & Labor Economics

Those familiar with the smarter voices on this topic, like our guest blogger Harry Surden, would not be surprised. There is a world of difference between computation as substitution for attorneys, and computation as complement. The latter increases lawyers’ private income and (if properly deployed) contribution to society. That’s one reason I helped devise the course Health Data and Advocacy at Seton Hall (co-taught with a statistician and data visualization expert), and why I continue to teach (and research) the law of electronic health records in my seminar Health Information, Privacy, and Innovation, now that I’m at Maryland. As Surden observes, “many of the tasks performed by attorneys do appear to require the type of higher order intellectual skills that are beyond the capability of current techniques.” But they can be complemented by an awareness of rapid advances in software, apps, and data analysis.
Read More

Aereo and the Spirit of Technology Neutrality

Aereo is a broadcast re-transmitter. It leases to subscribers access to an antenna that captures over-the-air television, copies and digitizes the signal, and then sends it into the subscriber’s home, on a one-to-one basis, in real time or at the subscriber’s later desire. Aereo was poised to revolutionize the cable business—or hasten its collapse.

At least, it was.

Wednesday the Supreme Court unequivocally held that Aereo infringes copyright law, per Section 106(4) (the Transmit Clause). Aereo’s main backer, Barry Diller, quickly waved the white flag. Aereo is done—and it’s unclear what exactly Justice Breyer’s majority opinion portends for other technologies, despite the majority’s “believ[ing]” that the decision will not harm non-cable-like systems.

As James Grimmelmann succinctly noted amid a flurry of thoughtful tweets, “aereo resolves but it does not clarify.” And that might be an understatement. Eric Goldman notes four unanswered questions. (Amazingly, the majority opinion does not even engage Cablevision.) I’d add to that list the still incredibly vague line demarcating a public performance and the broader issue of technology neutrality in copyright law. (More on technology neutrality in a moment.)

The Court’s opinion relied heavily upon legislative history and, in particular, Congress’s abrogation of two earlier Supreme Court decisions on cable re-transmitters, Fortnightly Corp. v. United Artists Television and Teleprompter Corp. v. CBS. The Aereo Court limited discussion entirely to “cable-like” systems, punted on technologically similar non-cable-like systems, and left a big question about the dividing line.

Overall, the Court came off sounding blind to the technological realities of 2014—in stark contrast to its relatively technology-savvy decision in Riley v. California. (Dan’s take on Riley.)

Margot Kaminski has an excellent post for The New Republic addressing the varying treatment of cloud computing in Aereo and Riley, noting how cloud concerns were waved off in Aereo but factored into the Court ruling that the government normally must get a warrant to search an arrestee’s cell phone. The question, Margot asks, is why the different treatment?

The simplest answer would be that the Court was dealing with two different legal regimes: Constitutional privacy law versus statutory copyright. But at the heart of both decisions, the Court was asked to decide whether an old rule applied to a new technology. In one case, the Court was hesitant, tentative, and deferential to the past legal model. And in the other, the Court was unafraid to adjust the legal system for the disruptive technology of the future.

I’m a fan of simplicity, and I think it is particularly helpful in answering this question.

The Fourth Amendment is dynamic. As Orin Kerr has explained: “When new tools and new practices threaten to expand or contract police power in a significant way, courts adjust the level of Fourth Amendment protection to try to restore the prior equilibrium.” The 1976 Copyright Act is not. And by design.

With the 1976 Copyright Act, Congress adopted the principle of “technology neutrality” for copyrightable subject matter and exclusive rights—to “avoid the artificial and largely unjustifiable distinctions” that previously led to unlicensed exploitation of copyrighted works in an uncovered technological medium. Instead, the 1976 Act was written to apply to known and unknown technologies.

Read More

Disruption: A Tarnished Brand

I’ve been hearing for years that law needs to be “disrupted.” “Legal rebels” and “reinventors” of law may want to take a look at Jill Lepore’s devastating account of Clay Christensen’s development of that buzzword. Lepore surfaces the ideology behind it, and suggests some shoddy research:

Christensen’s sources are often dubious and his logic questionable. His single citation for his investigation of the “disruptive transition from mechanical to electronic motor controls,” in which he identifies the Allen-Bradley Company as triumphing over four rivals, is a book called “The Bradley Legacy,” an account published by a foundation established by the company’s founders. This is akin to calling an actor the greatest talent in a generation after interviewing his publicist.

Critiques of Christensen’s forays into health and education are common, but Lepore takes the battle to his home territory of manufacturing, debunking “success stories” trumpeted by Christensen. She also exposes the continuing health of firms the Christensenites deemed doomed. For Lepore, disruption is less a scientific theory of management than a thin ideological veneer for pushing short-sighted, immature, and venal business models onto startups:

They are told that they should be reckless and ruthless. Their investors . . . tell them that the world is a terrifying place, moving at a devastating pace. “Today I run a venture capital firm and back the next generation of innovators who are, as I was throughout my earlier career, dead-focused on eating your lunch,” [one] writes. His job appears to be to convince a generation of people who want to do good and do well to learn, instead, remorselessness. Forget rules, obligations, your conscience, loyalty, a sense of the commonweal. . . . Don’t look back. Never pause. Disrupt or be disrupted.

In other words, disruption is a slick rebranding of the B-School Machiavellianism that brought us “systemic deregulation and financialization.” If you’re wondering why many top business scholars went from “higher aims to hired hands,” Lepore’s essay is a great place to start.
Read More

Tesla encourages free use of its patents—but will that protect users from liability?

Tesla Motors made big news yesterday with an open letter titled, “All Our Patent Are Belong to You.”

The gist of the letter was that Tesla Motors had decided that, in the interest of growing the market for electric vehicles and in the spirit of open source, it would not enforce its patents against “good faith” users. The key language was at the end of the second paragraph:

Tesla will not initiate patent lawsuits against anyone who, in good faith, wants to use our technology.

Tesla made clear it was not abandoning its patents, nor did it intend to stop acquiring new patents. Rather, it just wanted to clear “intellectual property landmines” that it decided were endangering the “path to the creation of compelling electric vehicles.”

The announcement, made on the company’s website, immediately attracted laudatory media attention. (International Business Times, Los Angeles Times, San Jose Mercury News, Wall Street Journal, etc.) As one commentator for Forbes wrote:

[H]anding out patents to the world is smarter still when you think how resource-sapping the process is. Engineers want to build not fill out paperwork for nit-picking lawyers. Why bog them down with endless red tape form-filling only to end up having to build an expensive legal department to have to defend patents that would likely be got around anyway?

Patents are meant to slow competition but they also slow innovation. In an era when you can invent faster than you can patent, why not keep ahead by inventing?

That’s a pretty concise summary of the general response: Patents are bad, Tesla is good, and all friction in technological innovation would be solved if others followed Tesla’s lead.

Setting aside a pretty loaded normative debate, I had a practical concern. Just how legally enforceable would Tesla’s declaration be? That is, if a technologist practiced one of Tesla’s patents, would they really be free from liability?

The answer isn’t clear. (At least, it wasn’t to a number of us on Twitter yesterday.) Certainly, Tesla could enter into a gratis licensing arrangement with every interested party; a prudent GC should demand that Tesla do so, but it’s unlikely Tesla would want to invest the time and money. In a nod to the vagueness of Tesla’s announcement, CEO Elon Musk also told Wired that “the company is open to making simple agreements with companies that are worried about what using patents in ‘good faith’ really means.”

But assuming Tesla offers nothing more than a public promise not to sue “good faith” users, this announcement may be of little social benefit. Worse, it seems to me that such public promises could provide a new vehicle for trolling.

Sure, Tesla may be estopped from enforcing its patents—though estoppel requires reasonable reliance, and this announcement is so vague that it’s difficult to imagine the reliance that would be reasonable—and Tesla isn’t in the patent trolling business anyway. (Sorry, patent-assertion-entity business.) But what if Tesla sold its patents or went bankrupt? Could a third party then enforce the patents? If it could, patents promised to be open source would seem a rich market for PAEs.

Tesla is not the first to pledge its patents as open source. In fact, as Clark Asay pointed out, IBM has already been accused of reneging on the promise. (See: “IBM now appears to be claiming the right to nullify the 2005 pledge at its sole discretion, rendering it a meaningless confidence trick.”) The questions raised by the Tesla announcement are, thus, not new. And, given enough time, courts will have to answer them.