June 12, 2014 / Jim Fenton

A Visit to the County Traffic Operations Center


I recently attended a public meeting to discuss future plans for expressways in Santa Clara County. At that meeting, Dan Collen, the Deputy Director for Infrastructure Development, mentioned that they give tours of the County’s Traffic Operations Center (TOC), the place where they monitor the sensors and control the traffic lights on County expressways. I have always been curious about this, and jumped at the opportunity. After all, how many of us, as little kids, thought there were people who actually sat in those controller cabinets? (I did.)

The TOC is located in San Jose, near the Mercury News facility just off 880. Our tour was shared with a dozen or so officers from the California Highway Patrol. We began in a conference room with a short PowerPoint briefing describing the system. The system covers all Santa Clara County expressways except Capitol Expressway (where installation is still in progress, due to complete in 2016). The system includes:

  • Over 55 miles of 1-gigabit fiber optic network
  • Over 4000 in-road detector loops
  • 400+ high definition cameras
  • 100+ bicycle sensors
  • 25 pedestrian sensors
  • Battery backup providing several hours of operation in the event of a power failure.

 

Expressways congestion map (from http://congestion.countyroads.org)

The coordination system for the traffic lights is much more sophisticated than most of us realize. There are several programs for each of the lights, which kick in based on observed traffic patterns. They obviously have put a lot of effort into fine-tuning the signals to be as efficient as possible. If you think you understand the “rules” for any of these traffic lights, you probably have only scratched the surface. When you consider the amount of time and gasoline saved by expressway users, this system paid for itself in only 11 days. And there are substantial environmental benefits as well.

Most of the bicycle and pedestrian sensors are deployed on the streets crossing the expressways since the lights generally default to green along the expressway. When a bicycle is detected, the lights provide a somewhat longer crossing time than for other vehicles. The pedestrian sensors supplement the crosswalk pushbuttons: If they sense pedestrians taking longer than usual to cross the intersection, they will extend the light cycle to allow the pedestrian to finish crossing. This means that they can program the pedestrian timing for typical pedestrians, rather than needing to accommodate the slowest pedestrians, disabled people, and so forth.

PTZ camera at Foothill Expressway and Main Street, Los Altos

Most of the cameras are in fixed positions viewing a particular direction of traffic on or crossing the expressway. These are supplemented in some locations by pan-tilt-zoom (PTZ) cameras that operators can aim as needed, perhaps to view an accident just out of range of the fixed cameras. The TOC does not record the video, apparently on direction from the County Council (somewhat to the disappointment of the Highway Patrol officers touring with us). However, they are starting to provide video feeds to other jurisdictions, and that may in the future indirectly result in some recording capability. They also make near real-time video available to anyone on the Web.

After the briefing, we went next door into the control room itself. It looked like you might expect: lots of screens on the walls showing the status of various intersections. They also had satellite TV weather and news feeds to help them anticipate and respond to external factors. They can view groups of cameras at any of the intersections, or get a large view of a single camera as needed.

Several aspects of this operation were quite impressive. A staff of four people operates this system, and these are the same people who maintain the physical facilities. So if somebody drives into one of the traffic signal controllers, it’s one of these same people who will go out and fix it. They also do electrical work at other County facilities, such as the private airports. This is a very lean and efficient operation.

They also didn’t hire outside consultants for the integration of this system; the same staff built it from commercial components, using products from established traffic control vendors. As a result, they know the system inside and out.

Without a facility like this, traffic on our expressways would be substantially worse than it is, affecting quality of life and the ability for our economy to scale. The County Roads people, and all of us in Santa Clara County, have reason to be proud of our very state-of-the-art and efficient Traffic Operations Center.


April 17, 2014 / Jim Fenton

Adventures with DNSSEC Part 2: Signing my Domain


As I mentioned in a previous article, I have been meaning to deploy DNS security extensions (DNSSEC) on my personal domain for some time. Today I completed the biggest piece of that: signing my domain and getting my domain registrar to publish Delegation Signer (DS) records to allow others to verify the validity of the signatures.

For a couple of years now, I have been waiting for my domain registrar, name.com, to support DNSSEC for the .net top-level domain. I waited and hoped that the new ICANN registrar accreditation agreement, effective this past January, would cause them to provide this support. But since they seem instead to be much more interested in selling domain registrations under new top-level domains like .ninja, I made preparations to sign my zone with DNSSEC, expecting to have to change registrars.

Before beginning, I’ll describe my DNS structure. I operate my own primary/master DNS server using BIND 9 on a virtual private server running Debian Linux. This server is also a slave DNS server for a few domains operated by friends. For my slave servers, I have been using a free service provided by Hurricane Electric to its users and participants in its IPv6 certifications (I am one of the latter).

As with my recent deployment of DNSSEC signature checking, a tutorial on HowToForge provided most of the help I needed. With the procedures given in that tutorial, I was able to generate the necessary keys and sign my zone.  I changed my BIND configuration to refer to the signed version of my zone file, and everything seemed to work. I tested my name server using the ‘dig +dnssec’ command, and saw the Resource Record Signature (RRSIG) record that accompanies each resource record returned.

On a hunch, I tried querying one of my Hurricane Electric slave servers. No RRSIG record. I logged into their administrative console to look at the zone file they have, and the RRSIG records were there; they just weren’t being returned in response to queries.

I asked on one of their online forums, and they confirmed that they don’t provide DNSSEC support. I can’t complain, since this is a free service, but I was a bit surprised, considering the leadership position HE has taken on IPv6 deployment, that they weren’t further along on DNSSEC. But I thank them for their service and the quick and straightforward response to my question.

So I leased an additional Debian virtual private server (from a different provider, to provide resilience); at $5/month, it won’t break the bank. I set up BIND and a firewall on that server, and made it the secondary/slave for my domain. Of course, I had to ask my registrar to change the DNS “glue” records to allow this new server to be found. After the old name service records timed out, all of my name servers began returning DNSSEC-signed responses.
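For anyone setting up something similar, the secondary zone definition on the new server looks roughly like this (the domain name, address, and file path below are placeholders, not my actual configuration):

```
// On the new secondary server's named.conf (placeholder values)
zone "example.net" {
        type slave;
        masters { 203.0.113.10; };          // address of the primary server
        file "/var/cache/bind/db.example.net";
};
```

The primary’s zone stanza needs a matching allow-transfer entry (or a TSIG key) so that the new server is permitted to pull the zone.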

Step 5 in the tutorial showed how to adjust the trust chain in a resolving DNS server to allow me to verify that the zone was properly signed, prior to actually submitting the DS records to the registrar. This is potentially important to test because if the domain is incorrectly signed, DNSSEC-aware caching servers and resolvers will return SERVFAIL errors, making the domain unresolvable. So I tried this test, but could never get the “ad” (authentic data) flag in my results. I’m still not sure why; it may have been because I used one of my authoritative name servers for this test, rather than setting up a third machine running BIND. All of my other testing had been successful, however, so I decided to go ahead and publish my DS records since my domain is small and only affects me and my family. I would not take that risk on a corporate domain!

As a result of the signing process, a file (dsset-<domain name>) is created on the primary name server containing the DS records for the zone. It is these records, when published (and signed) in the zone of the next-higher domain (.net in my case), that provide a secure linkage between my domain and the rest of the DNS infrastructure. The process for getting this publication done depends on the domain registrar being used.
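The contents of that dsset file can be reproduced from the zone’s DNSKEY record. Here is a minimal sketch of the two derivations involved, the key tag (RFC 4034 Appendix B) and the digest over the owner name plus DNSKEY RDATA (RFC 4509 for SHA-256); the sample key bytes below are made up, not my real key:

```python
import hashlib
import struct

def key_tag(rdata):
    # RFC 4034 Appendix B: 16-bit accumulator over the DNSKEY RDATA,
    # even-offset bytes shifted into the high byte, then folded.
    acc = 0
    for i, byte in enumerate(rdata):
        acc += byte if i & 1 else byte << 8
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

def ds_digest(owner, dnskey_rdata, algo=hashlib.sha256):
    # DS digest = hash(canonical wire-format owner name || DNSKEY RDATA)
    wire = b"".join(
        bytes([len(label)]) + label.lower().encode()
        for label in owner.rstrip(".").split(".")
    ) + b"\x00"
    return algo(wire + dnskey_rdata).hexdigest()

# A made-up KSK RDATA: flags=257 (KSK), protocol=3, algorithm=8, dummy key
rdata = struct.pack("!HBB", 257, 3, 8) + b"\x03\x01\x00\x01"
print(key_tag(rdata), ds_digest("example.net.", rdata))
```

Comparing the output of a sketch like this against the dsset file is a handy sanity check before pasting anything into a registrar’s web form.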

As I said, name.com does not support DNSSEC for .net domains. But I happened to see an obscure link at the bottom of the “Nameservers” tab for my domain that said I could create registry-level DNSSEC records for my domain on the DNSSEC Management Page. So I clicked on it. And waited about a minute and a half…

When the page did display, it said, “No supported DNSKEY records were found in DNS. This usually means that your name servers are not properly configured for DNSSEC.” But I checked again, and the DNSKEY records were indeed there so I ignored the warning message.  I viewed the contents of the dsset file and pasted the key tag, algorithm, digest type, and digest into the window. After pressing Submit and another significant delay, the first DS record (digest type 1, SHA-1) was published. But the web page wouldn’t accept the second DS record (digest type 2, SHA-256). Eventually I figured out that an embedded space in the fingerprint was the problem, and the record was accepted successfully.

Given name.com’s stated lack of support for registering DS records in .net, I didn’t expect to get this far. I probably will, in the near future, switch to another registrar that officially supports DNSSEC (as well as providing two-factor authentication for domain management). But starting today, I’m doing what Jeff Moss referred to at TrustyCon as protecting others: my domain is DNSSEC-signed.

Image “Desk and quill pens, Carpenter’s Hall” by Flickr user Jim Bullard used under Creative Commons CC-BY-2.0 license.

April 9, 2014 / Jim Fenton

Adventures with DNSSEC Part 1: Checking signatures

Some signatures from the Declaration of Independence

A confession: “Deploy DNSSEC” has been on my to-do list for at least a couple of years. During that time, I have been pinging my domain registrar to allow registration of DS records so that my domain’s DNS can be authenticated properly. I have used their delay in making this possible as an excuse to push that to-do list item down to the bottom.

I recently attended TrustyCon, an alternative security conference held during the RSA Conference a month or so ago. At TrustyCon, Jeff Moss, organizer of DefCon and other security conferences, gave a talk entitled “Community Immunity” that addressed security from a public health point of view: we need to be secure both to protect ourselves and the community. A video of Jeff’s talk is on YouTube, starting at 6:06:00.

Using a caching name server that checks DNSSEC response signatures was one of his examples of protecting oneself. Signing your domain with DNSSEC protects others who use it. Understanding this distinction started me thinking about DNSSEC not as one big thing to do, but as two or more. Since I run my own caching name server, and checking signatures is supposed to be easy, why not start there?

According to Jeff’s talk, turning on DNSSEC verification is as simple as putting the following in your named.conf file (assuming you’re running BIND, of course):

options {
        dnssec-enable yes;
        dnssec-validation yes;
        };

So I thought I’d give it a try, but I did a little homework first. I found lots of confusing information, including the fact that since BIND 9.5 (I’m running 9.7), dnssec-enable and dnssec-validation have both defaulted to yes. So those defaults couldn’t be the reason I wasn’t checking signatures.

After looking around a while, I found a tutorial on HowToForge that describes the DNSSEC deployment process on Debian Linux in some detail. It doesn’t clearly separate the validation steps from the signing steps, but step 2 told me what was missing: my configuration didn’t specify the root keys that should be trusted by my name server. To fix this problem, I added the line:

include "/etc/bind/bind.keys";

at the bottom of my configuration, and restarted the name server.  It immediately crashed with a segmentation fault!

After a bit of hunting around, I found Debian bug #630733, which describes a segmentation fault under some circumstances when starting BIND. The circumstances didn’t quite match mine, but it gave me the clue I needed: my system had an empty file, /var/cache/bind/managed-keys.bind, that was confusing BIND. After removing that empty file, the name server worked fine.

To test, I tried resolving the intentionally mis-signed domain dnssec-failed.org on a machine that uses this name server:

fenton@kernel:~$ dig dnssec-failed.org soa

If the name server is checking signatures, the status returned will be SERVFAIL. If it is not checking signatures, the status will be NOERROR.
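The same check can be scripted. Here is a minimal sketch using only the Python standard library that builds a DNSSEC-aware SOA query (EDNS0 with the DO bit set) and reads the response code; the resolver address and query id are placeholders:

```python
import socket
import struct

NOERROR, SERVFAIL = 0, 2

def build_query(name, qtype=6):  # qtype 6 = SOA
    # Header: id, flags (RD set), QDCOUNT=1, ARCOUNT=1 (for the EDNS0 OPT record)
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    qname = b"".join(
        bytes([len(label)]) + label.encode()
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # class IN
    # EDNS0 OPT pseudo-RR: root name, type 41, 4096-byte UDP size, DO bit in TTL field
    opt = b"\x00" + struct.pack("!HHIH", 41, 4096, 0x8000, 0)
    return header + question + opt

def rcode(response):
    # The response code is the low 4 bits of the flags word (bytes 2-3)
    return struct.unpack("!H", response[2:4])[0] & 0x0F

def check(resolver="8.8.8.8"):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(5)
    s.sendto(build_query("dnssec-failed.org"), (resolver, 53))
    reply, _ = s.recvfrom(4096)
    return "validating" if rcode(reply) == SERVFAIL else "not validating"
```

A validating resolver will return SERVFAIL (rcode 2) for the deliberately broken zone; a non-validating one returns NOERROR with the records.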

Like me, you might be wondering whether this is a good idea. Might DNS resolution for some domains be broken by inadvertent bad signatures? Using the dnssec-failed.org test above, I found that both Google (8.8.8.8) and Comcast (75.75.75.75) are checking DNSSEC signatures. If it doesn’t cause a problem for them, it won’t for me.

I’m still struggling with deploying DNSSEC on the signing side, so I’ll leave that for a subsequent article.

 

March 24, 2014 / Jim Fenton

Identity and Attribute Providers

AttributeDistribution

One of the more unconventional but important aspects of the National Strategy for Trusted Identities in Cyberspace (NSTIC) is its model of attribute providers (APs) as distinct from identity providers (IdPs). However, this concept does not seem to be fully embraced by many who are active in the Identity Ecosystem Steering Group (IDESG), the organization that is working to turn the NSTIC vision into a reality.

Identity providers in current identity management systems, primarily those that are enterprise-focused or based on “social identity” like Facebook, act as attribute providers as well. In an enterprise, you would typically pass your login credentials (typically username and password) to an application that would in turn use a protocol like LDAP or Active Directory to verify the credentials with an identity provider. If the credentials are valid, the identity provider returns attributes about you, e.g., name, employee ID, department ID, and job title, to the enterprise application which uses the attributes to decide what you’re authorized to do.

Social login operates somewhat differently because the application isn’t necessarily trusted to receive your credentials. So Facebook Login collects your username and password directly, and uses the OAuth protocol to return your attributes, including name, time zone, friends list, and any links you have shared, to the application requesting the login. As in the enterprise login case, the attribute provider and the identity provider are one and the same.

In the broader context of NSTIC, there are several reasons why identity providers and attribute providers can’t be one and the same:

  • Different attribute providers are authoritative for different attribute classes – In an enterprise, the enterprise itself is authoritative for nearly all attributes of interest. But in the broader NSTIC use case, there isn’t a common point that all parties trust. Users typically will have different providers for different types of attributes: proof that you’re a full-time student might come from your school district or university, an assertion that you’re an adult might come from your motor vehicle department, and your credit-worthiness might come from one of the major credit bureaus. Requiring these all to be asserted by the identity provider requires it to be trusted by basically everyone, and that’s hard to achieve.
  • Users need to be able to choose their identity provider – In the course of processing transactions for you, your identity provider will be exposed to a great deal of information about where you use your identity. For that reason, the principle of IdP choice described in the NSTIC strategy document is very important. To make that choice meaningful, we have to minimize the trust in the IdP required by others, such as relying parties. Except for self-asserted attributes, where no trust is required, attribute assertions by the IdP require relying parties to consider the IdP authoritative for those attributes. That severely constrains the possible range of IdPs users can choose from, making it harder for them to find an IdP they can trust with this intimate information.
  • Support for anonymous and pseudonymous interactions is required – NSTIC recognizes the need to support anonymous and pseudonymous interactions in order to facilitate important uses that might not occur otherwise. If user attributes accompany every use of an online identity, these types of interactions are not possible. An IdP can simply assert an identifier, which should be opaque (not divulging any other information about the user). In many cases, identifiers may also be directed (different for each place you use your identity, so that your activities aren’t as easily correlated) and sometimes ephemeral (different for each session). Depending on the specific use, some attributes might be provided with the consent of the user, such as an assertion that the user is of legal age, without identifying the specific user.
  • Attribute providers must be insulated from sensitive information – When you use your driver’s license to prove that you’re of legal age, the issuer of that license doesn’t generally get information about where that ID has been checked. Given the sensitivity of some online transactions, the same characteristic is desirable: in most cases, the authoritative source for an attribute isn’t entitled to know how and where it is used. For this reason, it may be preferable to route attribute queries through the IdP to insulate the relying party from attribute providers. This characteristic isn’t called out explicitly in the NSTIC, but is a privacy enhancing technology that might be employed to prevent attribute providers from tracking users’ use of their online identities. This, in turn, motivates an arms-length relationship between users’ IdPs and attribute providers.
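To make the directed, opaque identifiers described above concrete, one common construction (my illustration, not something the NSTIC documents prescribe) derives a distinct identifier per relying party by keying a hash with a per-user secret held at the IdP:

```python
import hashlib
import hmac

def directed_identifier(user_secret: bytes, relying_party: str) -> str:
    # Same secret + same relying party -> a stable identifier for that party;
    # different relying parties see values they cannot correlate or invert.
    return hmac.new(user_secret, relying_party.encode(), hashlib.sha256).hexdigest()

secret = b"per-user key held by the IdP"  # placeholder
print(directed_identifier(secret, "bank.example"))
print(directed_identifier(secret, "store.example"))
```

The identifier is opaque (it divulges nothing about the user) and directed (each relying party gets a different value), while remaining stable enough for the relying party to recognize a returning user.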

While some IdPs may also operate attribute providers (particularly for self-asserted attributes, which like the IdP are on behalf of the user), it’s cleaner to think of the IdP and AP as separate functions that may incidentally be operated by the same entity, subject to the arms-length concern mentioned above. More generally, an attribute provider is somewhat like a relying party, in that it receives a trustable assertion of an identifier from the user’s IdP representing that user. IdPs, since they represent the user, may also serve as directories of APs where attributes for a given user can be found. This may also limit the leakage of information about the user that comes from their choice of attribute providers. The use of a particular state DMV as an attribute provider correlates strongly with residence in that state, while the assertion provided might actually be signed on behalf of a broader authority such as AAMVA.

An area where the combined IdP/AP model seems to dominate thinking is identity proofing, which is the binding of an online identity with trusted real-world attributes, such as the user’s legal name. In the combined model, one needs to go through a process, either in person or through association with an existing relationship such as a bank account, prior to the issuance of a credential. This is important because the credentials in these cases often incorporate those identifying attributes, just as a driver’s license or government PIV card has your name printed on it and encoded on a magnetic stripe and/or chip. But when attribute providers are separate, they need an assertion from the user’s IdP to bind the attributes they are verifying to that digital identity, so the credential needs to be issued first. Identity proofing is a function of the attribute provider, not the identity provider, in this model.

The combined IdP/AP thinking also affects how one views a credential. We use the word “credential” extensively in the offline world to describe a variety of documents and situations, ranging from the presentation of a birth certificate to get a passport to the use of that passport to travel internationally. In the NSTIC authentication model, the user presents their credential to their IdP. It need not contain any attribute information, because the IdP does not need it. This differs from the combined model, where the relying party obtains information, such as the user’s name or employer, directly from a credential like a government PIV card. But in the NSTIC model, the choice of credential is up to the user and IdP, subject to the requirement that it be sufficiently secure to satisfy the relying party.

Illustration is taken from “Identity Systems”, a presentation I gave in late 2009. The entire presentation is available on Slideshare.

 

March 14, 2014 / Jim Fenton

Commercial vs. Government Surveillance: Which is more dangerous?

Last Sunday evening’s story about data brokers on 60 Minutes is a long-needed heads-up for many people about the widespread but largely invisible practices of the companies that collect, aggregate, and sell information about us.

Monday morning, in an interview at SXSW with Edward Snowden, the question was raised about whether government or commercial surveillance is more of a concern. Snowden’s response was that the government has the ability to prosecute and incarcerate people, which commercial providers don’t, and that we should therefore be more concerned about government surveillance:

Right now, my thinking, I believe the majority’s thinking is that the government has the ability to deprive you of rights. Governments around the world whether it is the United States government, whether it is the Yemeni government, whether it is Zaire, any country: they have police powers, they have military powers, they have intelligence powers. They can literally kill you, they can jail you, they can surveil you. Companies can surveil you to sell you products, to sell your information to other companies, and that can be bad, but you have legal recourse.

Chris Soghoian of ACLU, who was interviewing him, correctly observed:

I am not crazy about the amount of data that Google and Facebook collect. Of course, everything they get the government can come and ask for too. There is the collection that the government is doing by itself and then there is the data that they can go to Google and Facebook and force them to hand over.

But that still may not be the whole story. Is there anything to prevent the government from going to the sort of data brokers described in the 60 Minutes report and simply buying the data they’re looking for, in bulk? I haven’t seen any concrete evidence that this is happening, but I would expect it to be done, either directly or through intermediaries.

Data brokers are typically secretive about the identities of their customers, so it’s highly doubtful that they will admit to this if it is happening.  I wonder if there is any sort of public records request that would reveal the existence of those sorts of contracts. If my hypothesis is correct, commercial surveillance is at least as dangerous as government surveillance, since there isn’t much of a distinction about how the data might ultimately be used. And since much of the information is behavioral, it has more potential for error.

Hopefully it doesn’t take another “Edward Snowden” from the data broker world for us to learn all the ways the data brokers use our data.

March 7, 2014 / Jim Fenton

RightsCon – an international experience


I spent the first part of this week at RightsCon, a conference dealing with the human rights issues associated with the internet, including freedom of speech, privacy, security/encryption, surveillance, and ensuring unimpeded access to the internet itself. It was organized by Access and attended by 600 or so people from 65 countries. In many ways, this was an atypical conference for me to attend: much more oriented toward policy than the technology issues that I usually focus on. But I enjoy conferences that stretch my experience, and RightsCon was an opportunity to better understand why people need to protect their privacy on the internet.

I decided to serve as a volunteer for RightsCon, the first time I have attended a conference as a volunteer. This was a fun experience — an opportunity to meet (if briefly) lots of amazing people, help make sessions run smoothly, and help attendees find their sessions. I spent about half the time as a “floater”, and about half staffing the registration desk or information table, or supporting one of the sessions.  When not assigned to other things, we were free to attend sessions. It was a fairly intense three days, but they treated us really well, fed us well, and it was a lot of fun.

I didn’t study the schedule extensively before signing up, so I didn’t do a very good job of specifying the sessions I really wanted to attend. As a result, I missed out on a few sessions that, judging from the Twitter comments, I would have enjoyed. When I volunteer again, I’ll do my homework better.

One of my staffing assignments was for a series of lightning talks on Monday. This included a session from Irina Raicu of the Markkula Center for Applied Ethics at Santa Clara University, presenting on “Are Software Engineers Morally Obligated to Care About Digital Human Rights?” I have met Irina and her colleagues at previous conferences, and this is a critically important topic for the software industry. The big message is that just because something is legal doesn’t make it morally acceptable.

The word ‘diversity’ is overused these days, but it definitely applies here. As I mentioned, 65 countries were represented; this is no small feat considering the difficulty many countries’ citizens have getting a visa to enter the United States. There was a balance of genders (including the GLBT community) and a wide range of attendees’ ages. The conference was greatly enriched as a result. I particularly remember a session on the difficulty of accessing the internet in some countries. A middle-aged man from Sudan was describing the situation there, and a young woman sitting next to him, who it turns out is from Azerbaijan, was able to compare it with the situation in her own country. In another session, an attendee sitting next to me spoke up with a comment. He was from Egypt, and commented from the perspective of someone who had experienced the turmoil there first-hand.

The feedback I would give the organizers would be to provide more categorization of the sessions in the program. I was looking for more technical content, and a couple of sessions I attended that I thought were more technical turned out not to be. Perhaps some keywords in the schedule would make it easier to choose sessions. I also observed, and heard from others, that several talks on similar topics of interest were scheduled against each other. That might be more obvious with keywords as well.

Overall, it was three very intense days, but time well spent. Next year’s RightsCon is in Manila, Philippines, so I don’t expect to attend, but I learned a lot this week and that was the objective.

February 11, 2014 / Jim Fenton

Some thoughts about Snowden: A middle ground


I thought it would be useful to write down my thoughts on the past months’ disclosures from Edward Snowden, the contractor who made off with and has leaked a vast trove of classified NSA documents. The main reason for doing this is to help gel my own opinions, but it may be interesting to others as well. The Snowden situation is very complex, and as you will see I can’t label him as either a hero or a traitor: he is a little of both, or a little of neither.

Read more…
