Transparency v. Privacy and Security: Getting the Issue Wrong

Last month’s Government Technology included an article entitled “Transparency vs. Privacy and Security: What’s an Agency to Do?” that responded to President Obama’s call for greater transparency in agency records and government spending. The piece explained that “[w]hile transparency sounds like a good idea and has met with successes, it is not without its challenges and even contradictions,” such as the competing call for greater protection of personal information. On this view, opening government and protecting privacy are initiatives that often travel in opposite directions.

To be sure, transparency in government decisionmaking can interfere with individuals’ privacy. Releasing agency records pursuant to a FOIA request could reveal an employee’s sensitive personal information. So could the posting of patients’ Social Security numbers on the VA’s website. But framing President Obama’s transparency goals as ineluctably clashing with privacy is the wrong way to look at the issue, and it risks obscuring the fact that more transparency can, in fact, lead to greater protections for privacy and security. This is especially true of agency information systems entrusted with crucial responsibilities, such as data matching programs that identify individuals for the No-Fly list or the Federal Parent Locator Service and computer systems that store and share personal information. These systems are opaque: their closed architecture and source code mean that users cannot discern how a system operates and protects itself. To date, these black boxes have not guaranteed the security and privacy of stored personal information. In fact, the GAO has given government agencies very low marks on the security of their systems, and some of the largest recent data leaks have involved agency systems.
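To see what is at stake, consider a toy sketch of how a name-matching program can go wrong. The watch-list entries, similarity threshold, and matching logic below are entirely my own invention for illustration; no agency’s actual algorithm is public, which is precisely the problem, since a traveler flagged this way has no means of inspecting the logic that flagged her.

```python
import difflib

# A toy illustration (not any agency's actual algorithm) of why data matching
# is error-prone: fuzzy comparison of names yields false positives that an
# opaque system gives the affected individual no way to inspect or contest.
WATCH_LIST = ["John T. Williams", "Maria Gonzales"]

def matches_watch_list(name: str, threshold: float = 0.85) -> list[str]:
    """Return watch-list entries whose similarity to `name` meets the threshold."""
    return [
        entry
        for entry in WATCH_LIST
        if difflib.SequenceMatcher(None, name.lower(), entry.lower()).ratio() >= threshold
    ]

# An innocent traveler with a similar name is flagged.
print(matches_watch_list("Jon T. Williams"))  # ['John T. Williams']
```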

Opening up the source code and system design of critical information systems has great potential to enhance accountability, security, and privacy. Daniel Weitzner’s MIT project proposes data mining systems whose infrastructure would be transparent. Such transparency would allow reviewers to see where information originates and with whom it is shared, thus providing accountability for the use of personal data. At the same time, building systems with open code would not make them more vulnerable, because security is achieved not by concealing defects but by allowing interested programmers to identify flaws that need to be fixed. Open code enlarges the available pool of intelligence, enabling a community of testers to find bugs and problems in the code. (The only security measures that must remain secret are a system’s changeable secrets, such as its passwords and cryptographic keys.) Revealing source code thus incurs only a low-level risk: unlike a warring nation that learns much from discovering an enemy’s military plans, computer attackers learn little from the disclosure of a system’s source code because computer security measures, such as firewalls, have a low level of uniqueness. Transparent systems could protect the privacy of personal data in a way that black-box systems cannot. Understanding this potential is particularly important at a time when agencies are in the midst of replacing legacy systems and building new ones, such as the TSA’s upcoming airline passenger screening systems.
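To make these ideas concrete, here is a minimal Python sketch of what a transparent-but-secure design might look like. The logging function records where a piece of data originated and with whom it was shared, the kind of accountability Weitzner’s project envisions, while the signing routine illustrates the point about changeable secrets: all of the code can be published and audited, and only the key must stay private. The function names, agency labels, and record ID are my own illustrations, not drawn from any actual agency system.

```python
import hashlib
import hmac
import secrets

# Kerckhoffs's principle in miniature: every line of this code can be
# published and reviewed; security rests only on the key, the system's
# one changeable secret.
SIGNING_KEY = secrets.token_bytes(32)

# A transparent audit trail: each disclosure of a personal record is logged
# with its origin and recipient, so reviewers can see where information
# originates and with whom it is shared.
AUDIT_LOG: list[tuple[str, str]] = []

def sign(entry: str) -> str:
    """Return a tamper-evident signature for a log entry."""
    return hmac.new(SIGNING_KEY, entry.encode(), hashlib.sha256).hexdigest()

def log_disclosure(source: str, recipient: str, record_id: str) -> None:
    """Record a disclosure of a personal record, signed against tampering."""
    entry = f"record={record_id} source={source} shared_with={recipient}"
    AUDIT_LOG.append((entry, sign(entry)))

def verify(entry: str, signature: str) -> bool:
    """Anyone holding the key can prove an entry was not altered after the fact."""
    return hmac.compare_digest(sign(entry), signature)

# Hypothetical usage; the agency names and record ID are illustrative only.
log_disclosure("state vital records office", "Federal Parent Locator Service", "A-1234")
entry, sig = AUDIT_LOG[0]
assert verify(entry, sig)
```

Publishing this code gives an attacker nothing useful: without the key, a forged or altered log entry will not verify.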
