## Quickies: Edu, rep, trust, ossl 1.0.0, ssl, rekey, body scan, leaks, net card exploit, runway

March 29th, 2010

So, I was out on a date, and we started discussing IT, security, and productivity. She was explaining some of the frustrations with getting approval for business-critical applications and tools at a certain employer, as well as the obstacles certain security configurations posed to getting the job done efficiently. The end result was that people looked for ways to bypass the IT department and its controls.

Stating the obvious: technology and security need to be incorporated into business processes, into helping people get their jobs done and protecting their ability to do so. When tension arises between, say, IT department policies and procedures and how people actually do their jobs, people are going to start looking for ways to circumvent those policies and procedures. After all, if a firm is not productive, not providing value, it won’t exist for long.

Which reminded me of this paper that has popped up in numerous places,

First, we need better understanding of the actual harms endured by users. [...] A main finding of this paper is that we need an estimate of the victimization rate for any exploit when designing appropriate security advice. [...]

Second, user education is a cost borne by the whole population, while offering benefit only to the fraction that fall victim. [...]

Third, retiring advice that is no longer compelling is necessary. [...]

Fourth, we must prioritize advice. In trying to defend everything we end up defending nothing. [...] In fact prioritizing advice may be the main way to influence the security decisions that users make. [...]

Finally, we must respect users’ time and effort. [...] We must understand that when budgets are exhausted, attention to any one piece of advice is achieved only by neglect of something else. [...]

Focusing on communities (e.g., companies, agencies) smaller than the general population of Internet users, I wrote this about user education way back when.

First off, we need a threat model. We need to figure out what we want to protect, its value, the potential attacks, the likelihood of those attacks, and the potential damage of those attacks. Determine the risks, and then mitigate them. Very important here is figuring out who needs to be held responsible for what mitigations and countermeasures. And, this threat model has to be reviewed periodically to keep it up to date. The assets that need to be protected change, the way business is done changes, the risks change, the mitigations change. Security is not static.

With that (and building that is certainly not trivial), we have these people, you, me, our parents, that are part of this threat model. They pose risks, and they help to mitigate risks. We need to minimize the former and maximize the latter. To do this right, I think we need people to feel responsible for security. To build this sense of responsibility, we need these security responsibilities audited and we need effective training to convey and reinforce these responsibilities – the combination of these two may be a linchpin to people security.

So, we talk to people about our threat model. Not only do we teach it, but we get feedback on it. And, we make sure everyone understands that threat model, and their place in it. To do this, we bring vivid examples from security audits into our security education, which helps build security training programs that provide exactly what we learn the most from: experience.

(Side note: I may have gone kind of crazy with the audited training in that post.

Now, pull the results of these attacks into your security training. Will there be an impact?

Well, I think so. You don’t quickly forget seeing yourself and/or those around you up in lights, as it were, and the attacks can certainly be used to increase the sense of responsibility felt by every employee. The demonstrations hit home because they can be related to – the attacks happened to you, your neighbors, your community. Whether the attacks succeed or fail, they amount to a shared experience for the organization, and teach people their importance to the security of their organization.

I think people would feel like their security responsibilities are not just words but actual obligations which have real consequences to the security of an organization. The threat model lays this out, but the experience drives it home. Also, people would be aware that their organization takes security seriously and is willing to proactively audit that security. And, those audits involve real life employees, doing the right thing and maybe even the wrong thing. It lets people know that they are at the root of security.

However, when I look at things like the foiled airplane bombing attempt on December 25, 2009, I do see a kind of user education inline with that excerpt playing an important role. The people tackled the alleged bomber because they knew their lives and the lives of others might depend on it. The experiences of, say, 2001-09-11 were taken to heart. They were the last line of defense, and they knew it.)

-

Perhaps a related bit on cooperation and reputation,

The possibly irritating message is that for promoting cooperative behavior, punishing works much better than rewarding. In both cases, however, reputation is essential.

-

And, I saw this on trust.

I’ve long advocated instead saying “is vulnerable to”, which makes it much clearer what is going on, so I would say “CNNIC is a certificate authority everyone is vulnerable to”. “Trusted third party” would become “Third party you are vulnerable to” and so on. Kinda clunky, but you know where you stand.

This ramble of mine discussed trust in these terms.

There is this big word we have all said before, trust. Trust can mean lots of things, but, at its heart, trust implies risk or vulnerability. Trust is about having faith that someone or something will act a certain way when it counts. While you may be able to influence that someone or something to act that certain way, in the end, the actions are out of your total control.

And, this other ramble of mine may have been criticized as naive optimism, but it also made this sort of point.

There are lots of inputs to trust – inputs like direct experiences, such as not dying when eating food from a particular restaurant or how you build friendships, or indirect experiences, such as asking a friend about what plumber to use or credit ratings. We use these inputs to calculate levels of trust, which are basically estimates of how much vulnerability we are willing [to] expose to the trusted. This leads to trade-offs, such as limiting the amount of trust you place in someone/something – you might only take a nibble of that unknown food to limit the risk of getting sick if it disagrees with you, or divide tasks amongst a group of people to limit the risk of any one person having too much access – versus the costs of these limitations – that nibble is not enough to meet your nutritional requirements forcing you to seek out additional food, and all those people/processes mean less work getting done. This is a balancing act, if you will.

-

Look at that,

The OpenSSL project team is pleased to announce the release of version 1.0.0 of our open source toolkit for SSL/TLS. This new OpenSSL version is a major release and incorporates many new features as well as major fixes compared to 0.9.8n. For a complete list of changes, please see http://cvs.openssl.org/getfile?f=openssl/CHANGES&v=OpenSSL_1_0_0.

Congratulations to the OpenSSL team!

-

So, this has been making the rounds.

At a recent wiretapping convention, however, security researcher Chris Soghoian discovered that a small company was marketing internet spying boxes to the feds. The boxes were designed to intercept those communications — without breaking the encryption — by using forged security certificates, instead of the real ones that websites use to verify secure connections. To use the appliance, the government would need to acquire a forged certificate from any one of more than 100 trusted Certificate Authorities.

The paper can be found here. Matt Blaze discusses some implications.

What this means is that an eavesdropper who can obtain fake certificates from any certificate authority can successfully impersonate every encrypted web site someone might visit. Most browsers will happily (and silently) accept new certificates from any valid authority, even for web sites for which certificates had already been obtained. An eavesdropper with fake certificates and access to a target’s internet connection can thus quietly interpose itself as a “man-in-the-middle”, observing and recording all encrypted web traffic, with the user none the wiser.

A while back, I had to do some research on web filtering technologies. It was quite standard for enterprise-level web filtering to include SSL MITM functionality. Generally, this involved taking advantage of having an enterprise’s CA certificates rolled out to end users, but it could have leveraged any trusted root certificates.
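One defense against a CA-signed forgery is to pin the certificate you expect rather than trusting whatever any authority vouches for. A minimal sketch (the certificate bytes and pin set below are stand-ins for illustration, not a real deployment):

```python
import hashlib

def cert_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def check_pin(der_cert: bytes, pinned: set) -> bool:
    """Accept the presented certificate only if it matches a known pin.

    A CA-signed forgery is still a *different* certificate, so its
    fingerprint will not match, even though a browser's ordinary
    chain validation would happily accept it."""
    return cert_fingerprint(der_cert) in pinned

# Stand-in certificate bytes; in practice these would come from the
# TLS handshake (e.g., the peer's DER certificate).
real_cert = b"-- the server's genuine DER certificate --"
forged_cert = b"-- a MITM certificate signed by some trusted CA --"
pins = {cert_fingerprint(real_cert)}

assert check_pin(real_cert, pins)
assert not check_pin(forged_cert, pins)
```

The trade-off, of course, is that pinning trades the CA problem for a key-rollover problem, which is exactly the tension the SSH-style and petname proposals below wrestle with.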

Anyway, I noted the following posted about.

The Monkeysphere project’s goal is to extend OpenPGP’s web of trust to new areas of the Internet to help us securely identify each other while we work online. The suite of monkeysphere utilities provides a framework to leverage the web of trust for authentication of HTTPS (TLS) and SSH communications.

In other words, Monkeysphere allows you to use your web browser or secure shell as you normally do, but you can use the OpenPGP Web of Trust to identify the servers you connect to and to prove your own identity to them. This brings to the web and ssh the possibility for key transitions, transitive identifications, revocations, and expirations of public keys. It also actively invites broader participation in the OpenPGP web of trust.

That reminds me of something I wrote a while back.

This is an active research area. Petnames have been proposed, which I like (think PGP web of trust in some form). This has similarities to the SSH-type trust model, which has also been proposed and which I also like. In recent minutes to an IETF-PKIX meeting, the Opera people were looking at “extended validation” certificates. There has been all sorts of talk, pros and cons, about “high-assurance” certificates.

-

I saw this rant about rekeying by one of the people that had to deal with the SSL rekeying mess.

It’s IETF time again and recently I’ve reviewed a bunch of drafts concerned with cryptographic rekeying. In my opinion, rekeying is massively overrated, but apparently I’ve never bothered to comprehensively address the usual arguments. Now seems like as good a time as any…

This post by Adam Back to Perry Metzger’s Cryptography mailing list seemed most in line with my thinking.

Another angle on this is timing attacks or iterative adaptive attacks like Bleichenbacher’s attack on SSL encryption padding. If re-keying happens before the attack can complete, perhaps the risk of a successful so far unnoticed adaptive or side-channel attack can be reduced. So maybe there is some use.

Simplicity of design can be good too.
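Back’s point can be sketched with a toy forward ratchet (the `REKEY_AFTER` threshold and the bare SHA-256 derivation below are illustrative choices for the sketch, not a vetted key schedule):

```python
import hashlib

REKEY_AFTER = 1000  # records per key; an arbitrary illustrative limit

class Ratchet:
    """Toy forward ratchet: derive the next key by hashing the old one.

    An adaptive or side-channel attack that needs more than REKEY_AFTER
    queries against a single key never completes before the key changes.
    (This says nothing about attacks that finish sooner, which is part
    of why rekeying is easy to overrate.)"""

    def __init__(self, key: bytes):
        self.key = key
        self.uses = 0

    def use(self) -> bytes:
        if self.uses >= REKEY_AFTER:
            self.key = hashlib.sha256(self.key).digest()
            self.uses = 0
        self.uses += 1
        return self.key

r = Ratchet(b"initial key material")
first = r.use()
for _ in range(REKEY_AFTER):
    later = r.use()
assert first != later  # the key rolled over during the run
```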

-

Complete shocker (note: the link is to a news article, but there is a quote therein containing a possibly profane word).

BAA is investigating an incident in which a Heathrow security operative “ogled” a female colleague who’d wandered into a body scanner, the Sun reports.

John Laker, 25, allegedly copped an eyeful of Jo Margetson, 29, when the latter “entered the X-ray machine by mistake”. She was “horrified” as Laker “pressed a button to take a revealing photo” and remarked: “I love those gigantic[...]

I should note that the ubiquity of digital video equipment (e.g., cell phones) renders moot whether or not these scanners record images.

-

Speaking of leaks, I saw this.

Here’s the background: Secure web connections encrypt traffic so that only your browser and the web server you’re visiting can see the contents of your communication. Although a network eavesdropper can’t understand the requests your browser sends, nor the replies from the server, it has long been known that an eavesdropper can see the size of the request and reply messages, and that these sizes sometimes leak information about which page you’re viewing, if the request size (i.e., the size of the URL) or the reply size (i.e., the size of the HTML page you’re viewing) is distinctive.

Consider a search engine that autocompletes search queries: when you start to type a query, the search engine gives you a list of suggested queries that start with whatever characters you have typed so far. When you type the first letter of your search query, the search engine page will send that character to the server, and the server will send back a list of suggested completions. Unfortunately, the size of that suggested completion list will depend on which character you typed, so an eavesdropper can use the size of the encrypted response to deduce which letter you typed. When you type the second letter of your query, another request will go to the server, and another encrypted reply will come back, which will again have a distinctive size, allowing the eavesdropper (who already knows the first character you typed) to deduce the second character; and so on. In the end the eavesdropper will know exactly which search query you typed. This attack worked against the Google, Yahoo, and Microsoft Bing search engines.
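The per-keystroke leak described above is easy to model. A toy sketch, with made-up suggestion lists and an arbitrary overhead constant standing in for the TLS record framing:

```python
# Toy model: each typed character yields a suggestion list whose
# serialized size is distinctive, so the ciphertext length (plaintext
# size plus fixed overhead) identifies the character.
suggestions = {
    "a": ["apple pie", "amazon"],
    "b": ["bbc news", "best laptop 2010", "banana bread recipe"],
    "c": ["craigslist"],
}

OVERHEAD = 37  # stand-in for fixed record/header overhead

def response_size(ch: str) -> int:
    """Observable length of the (hypothetically encrypted) reply."""
    return len("\n".join(suggestions[ch])) + OVERHEAD

# The eavesdropper profiles sizes offline by typing each character
# herself, then inverts the mapping when watching the victim.
size_to_char = {response_size(c): c for c in suggestions}

observed = response_size("b")  # sniffed from the victim's session
assert size_to_char[observed] == "b"
```

Repeating the inversion for each successive keystroke reconstructs the whole query, which is exactly the attack described against the live search engines.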

The paper can be found here.

-

Remembering the exploitation of wireless cards and their drivers, here comes some other fun stuff with network cards presented at CanSecWest. [via cypherpunks mailing list]

The presentation was entitled “Can you still trust your network card?”. The talk explained how an attacker could exploit a flaw to run arbitrary code inside some network controllers (NICs). The attack uses routable packets delivered to the victim’s NIC. Consequently, multiple attacks can be conducted, including: man-in-the-middle attacks on network connections, access to cryptographic keys on the host platform, or malware injection on the victim’s host platform (see §2).

The slides can be found here. From slide 38,

On this particular NIC and firmware version, an attacker is able to
perform arbitrary code execution:

Initial jump
->an attacker can overwrite a return address in the stack;
->she can find a stable (for a firmware version) memory address for username;
->she can put exploit code in username and jump there.

Game over at slide 40,

Now the attacker can:
->run arbitrary code on the RX RISC;
->provide new code using simple packets;
->rewrite the firmware if needed;

In between the excerpts is a summary of the attack. Slide 49 is a hoot too.

Also from CanSecWest: kernel exploitation is the new black, from the people behind the cr0 blog, such as this.

-

To end this rainy day on a fun note, the following caught my eye.

The same post at Quomodocumque has this completely odd video of an interview with William Thurston and fashion designer Dai Fujiwara. Apparently, Thurston provided the inspiration for Issey Miyake’s fall fashions, “8 Geometry Link Models as Metaphor of the Universe”.

You can see the finale of their Paris fashion show here, including Thurston joining Fujiwara on stage.[...]

## FBSD 7 stable to 8 stable, cryptome, twitenc, fact fault, tor, hidden, limulus

March 11th, 2010

Upgrading from FreeBSD 7 stable to FreeBSD 8 stable went smoothly. What follows are the general steps I followed. (FreeBSD handbook guidance can be found here for the base system and here for the ports.)

• Modify my custom kernel configuration for the FreeBSD 8 kernel.

(The generic FreeBSD kernel configuration for the i386 platform as a starting point can be found in “/usr/src/sys/i386/conf/GENERIC”. e.g.,

• cp /usr/src/sys/i386/conf/GENERIC /usr/mykernconf8
• ln -s /usr/mykernconf8 /usr/src/sys/i386/conf/mykernconf8
• vi /usr/mykernconf8 and modify as desired)
• vi /usr/stable-supfile and update my custom "stable-supfile" to point to "RELENG_8" by changing the relevant line to "*default release=cvs tag=RELENG_8"

(A sample “stable-supfile” as a starting point can be found in “/usr/share/examples/cvsup/stable-supfile”. e.g.,

• cp /usr/share/examples/cvsup/stable-supfile /usr/stable-supfile
• vi /usr/stable-supfile and modify as needed, such as setting the default release to 8 and setting the host to one of the FreeBSD cvs server mirrors)
• cd /usr/src
• cvsup -g -L 2 ../stable-supfile (update source tree)
• make buildworld (build system binaries, manpages, etc.)
• make kernel KERNCONF=mykernconf8 (build and install kernel)
• reboot (into single user mode)
• cd /usr/src
• mergemaster -p (prepare for merge of updated scripts and configuration files)
• make installworld (install system binaries, manpages, etc.)
• mergemaster (merge in updated scripts and configuration files)

There was a version increment of the standard contents in “/etc” from 7 to 8, so there was much to sift through. I had made modifications to a few configuration files and scripts that had to be merged, but, for the majority, I just installed the new version.

• reboot

Next came the joy of rebuilding all the ports. I chose to go with installing pre-built packages (where available), and subsequently rebuilding those few ports that I had customized.

• cd /usr/ports
• cvsup -g -L 2 ../ports-supfile (update ports tree)

Examine “/usr/ports/UPDATING” to see if there were any special instructions relevant to upgrading the installed ports and “/usr/ports/MOVED” to see what ports have been (re)moved.

• portsdb -Fu (fetch new index and build db)
• pkgdb -F (check package registry)
• portupgrade -nvOpPfa (to see what is expected to happen without performing the actual upgrade)
• portupgrade -vOpPfa (this upgrades installed ports from pre-built packages where available; otherwise, it builds and installs the ports from the source, and creates packages for them.)
• portupgrade -pf <all those ports that I had custom configs or otherwise made tweaks> (build and install the specified ports from source, and create packages for them)

• pkgdb -FL (check package registry and look for lost dependencies)
• portsclean -CDDLP (clean up working dirs, distros, packages, and libraries)

-

Long time readers of this blog may remember that I attended a talk given by John Young of Cryptome.

I attended the panel discussion on “The Secret World of Global Eavesdropping” yesterday, as mentioned in a previous post. It was composed of Patrick Radden Keefe, moderator, and John Young, primary speaker. (Robert Windrem did not attend.)

[...]

For those that don’t know, Cryptome.org publishes information on national security, intelligence, cryptography, etc. with a technical focus.

Well, it seems Cryptome has been in the news a bit of late.

1

Microsoft has managed to do what a roomful of secretive, three-letter government agencies have wanted to do for years: get the whistleblowing, government-document sharing site Cryptome shut down.

Microsoft dropped a DMCA notice alleging copyright infringement on Cryptome’s proprietor John Young on Tuesday after he posted a Microsoft surveillance compliance document that the company gives to law enforcement agents seeking information on Microsoft users.[...]

2

In a bizarre up-and-down — literally — series of events, the controversial site Cryptome.org was forced offline yesterday after posting a sensitive Microsoft document on its site and was back online today.

3

PayPal has finally made good on its pledge to restore Cryptome’s account many hours after the firm’s head of global communications told Register readers it had already done so.

-

Ok.

As i announced yesterday, the new version of shrimp7 (31) support compression for your Twitter, Facebook and Friendfeed posts. With that technique you will be able to post messages that are longer then 140 characters. When i implemented message compression i had the idea to implement a AES-256bit encryption method for shrimp7 version 32. I use the same principles as the compression method i use. After encryption I’ll add 2 characters in front of the string so applications could recognize compressed or encrypted messages.

If you weren’t laughing already, from the cypherpunks mailing list…

Date: Sun, 7 Feb 2010 21:17:30 -0800
Subject: Re: 256-bit encryption for Twitter posts
From: coderman
To: Ted Smith
Cc: Cypherpunks list

On Sun, Feb 7, 2010 at 3:17 PM, Ted Smith wrote:
> …
> 256-bit encryption for Twitter posts
>….
> .?WsSMSoaGhoZFjHZzQzx7iOZ
> +GKmXXcyD hq0iEBExlReVG2f0ACO256i84cOC7QlxO/txTuRdkQwL
> +fBGZlcUQBQoDHLLm/3cFbEEW3ZU8I/CD63wfgpGbAx+eH9oPAmVyYv14Y=

i say again:
twitter is ruining the internets…

-

Factoring.

On December 12, 2009, we factored the 768-bit, 232-digit number RSA-768 by the number field sieve (NFS, [20]). The number RSA-768 was taken from the now obsolete RSA Challenge list [38] as a representative 768-bit RSA modulus (cf. [37]). This result is a record for factoring general integers. Factoring a 1024-bit RSA modulus would be about a thousand times harder, and a 768-bit RSA modulus is several thousands times harder to factor than a 512-bit one. Because the first factorization of a 512-bit RSA modulus was reported only a decade ago (cf. [7]) it is not unreasonable to expect that 1024-bit RSA moduli can be factored well within the next decade by an academic effort such as ours or the one in [7]. Thus, it would be prudent to phase out usage of 1024-bit RSA within the next three to four years.
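The “about a thousand times harder” figure falls out of the heuristic general number field sieve running time, L_n[1/3, (64/9)^(1/3)]. A quick back-of-the-envelope check (the absolute values are meaningless; only the ratio between moduli sizes is informative):

```python
import math

def nfs_cost(bits: int) -> float:
    """Heuristic GNFS cost L_n[1/3, (64/9)^(1/3)] for an n-bit modulus.

    L_n[1/3, c] = exp(c * (ln n)^(1/3) * (ln ln n)^(2/3)).
    Only ratios between sizes are meaningful."""
    ln_n = bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

ratio = nfs_cost(1024) / nfs_cost(768)
print(f"1024-bit vs 768-bit cost ratio ~ {ratio:.0f}")
# The ratio lands on the order of a thousand, matching the paper's claim.
assert 500 < ratio < 5000
```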

Implementation fault abuse.

1

In this work we described an end-to-end attack to a RSA authentication scheme on a complete FPGA-based SPARC computer system. We theorized and implemented a novel fault-based attack to the fixed-window exponentiation algorithm and applied it to the well known and widely used OpenSSL libraries. In doing so we discovered and exposed a major vulnerability to fault-based attacks in a current version of the libraries and demonstrated how this attack can be perpetrated even with limited computational resources.

2

We employed with success the induced faults in order to lead attacks against industry grade implementations of the RSA and the AES cryptosystems. Moreover we devised two new attack techniques, one for each cryptosystem and have been able to validate their practical effectiveness with a thorough experimental campaign. We were able to successfully break the AES cipher employing only 4kB of faulty ciphertext, to retrieve an RSA encrypted plaintext using at most 5 faulty ciphertexts regardless of the size of the modulus and to factor the RSA modulus employing at most two faulty signatures. After conducting the whole experimental campaign no signs of tampering were left on the attacked device, thus proving that the employed technique is not invasive and does not alter the further functioning of the device. The attack technique is fully realizable with low cost off-the-shelf instruments which is a significant strong asset of the proposed attack technique.

-

Back in January, Tor 0.2.1.22 was released (as of this writing, 0.2.1.24 is the current stable release). In the announcement,

Tor 0.2.1.22 rotates two of the seven v3 directory authority keys and locations, due to a security breach of some of the Torproject servers: http://archives.seul.org/or/talk/Jan-2010/msg00161.html

From the referenced message,

In early January we discovered that two of the seven directory authorities were compromised (moria1 and gabelmoo), along with metrics.torproject.org, a new server we’d recently set up to serve metrics data and graphs. The three servers have since been reinstalled with service migrated to other servers.

-

I’ve brought up hidden assumptions often enough here, so this resonated.

The rules surrounding markets matter a lot–and the reason we don’t know this is that the rules that work have disappeared into the background, faded out of our consciousness, become part of the miasma of “the market”. For example, I recall a web debate years ago in which someone made the standard point that cartels are very difficult to hold together, which means anti-trust rules about this sort of thing have dubious utility. I believe it was Eugene Volokh who pointed out that this was true . . . but only because courts refused to enforce cartel agreements. If courts did enforce them, cartels would work pretty well–which is why we still have professional sports leagues.

-

Limulus is an acronym for LInux MULti-core Unified Supercomputer. The Limulus project goal is to create and maintain an open specification and software stack for a personal workstation cluster. Ideally, a user should be able to build or purchase a small personal workstation cluster using the Limulus reference design and low cost hardware. In addition, a freely available turn-key Linux based software stack will be created and maintained for use on the Limulus design. A Limulus is intended to be a workstation cluster platform where users can develop software, test ideas, run small scale applications, and teach HPC methods.[...]

And watch the hardware costs drop.

September 2007 Total: $2302 (US dollars)
September 2008 Total: $2092 (US dollars)*

* includes RAM upgrade price from 1GB/node to 2GB/node

I still remember my Beowulf cluster in college.

## Haiti

January 14th, 2010

I turned on my television to see a city razed, a gruesome tapestry of survivors and corpses. A girl, buried under the rubble, was screaming for help; a group of men, after hours, managed to free her from the broken concrete. The reporter asked her if she had been scared, and, unconquerable, she told the camera “her heart never skipped a beat.” Many others felt no more, while those they left behind felt all too much.

Mother nature has no wrath, no anger, no hatred. She feels nothing, she is unadorned indifference. And, this makes her brute violence all the more terrifying. As Thomas Hardy put it,

If but some vengeful god would call to me
From up the sky, and laugh: “Thou suffering thing,
Know that thy sorrow is my ecstasy,
That thy love’s loss is my hate’s profiting!”
Then would I bear, and clench myself, and die,
Steeled by the sense of ire unmerited;
Half-eased, too, that a Powerfuller than I
Had willed and meted me the tears I shed.

But not so. How arrives it joy lies slain,
And why unblooms the best hope ever sown?
-Crass Casualty obstructs the sun and rain,
And dicing Time for gladness casts a moan….
These purblind Doomsters had as readily strown
Blisses about my pilgrimage as pain.

Yet, it is this very indifference and upheaval that burned into us the passion that fires our spirit and ingenuity. We have forged ahead thus far, and we will not go quietly into the night. And, when we see our neighbors shaken, we offer a hand to steady them. Cold indifference, be damned.

## Quickies: TLS reneg, Karmic, beauty, suggest, TC

November 12th, 2009

By now, everyone has heard of the TLS renegotiation vulnerability,

Transport Layer Security (TLS, RFC 5246 and previous, including SSL v3 and previous) is subject to a number of serious man-in-the-middle (MITM) attacks related to renegotiation. In general, these problems allow an MITM to inject an arbitrary amount of chosen plaintext into the beginning of the application protocol stream, leading to a variety of abuse possibilities. In particular, practical attacks against HTTPS client certificate authentication have been demonstrated against recent versions of both Microsoft IIS and Apache httpd on a variety of platforms and in conjunction with a variety of client applications. Cases not involving client certificates have been demonstrated as well. Although this research has focused on the implications specifically for HTTP as the application protocol, the research is ongoing and many of these attacks are expected to generalize well to other protocols layered on TLS.

A nice write-up of the issue can be found here,

Marsh Ray has published a new attack on the TLS renegotiation logic. The high level impact of the attack is that an attacker can arrange to inject traffic into a legitimate client-server exchange such that the TLS server will accept it as if it came from the client. This may allow the attacker to execute operations on the server using the client’s credentials (e.g., order a pizza as the client). However, the attacker does not (generally) get to see the response. Obviously this isn’t good, but it’s not the end of the world. More details below.
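The core of the attack, the server gluing attacker-chosen plaintext onto the front of the victim’s authenticated stream, can be modeled in a few lines. A toy illustration with made-up HTTP requests, not a network exploit:

```python
# Toy model of the renegotiation splice: the attacker opens a TLS
# session, sends a partial request, then proxies the victim's handshake
# as a "renegotiation" on top. A pre-fix server sees one continuous
# plaintext stream and attributes all of it to the victim.
attacker_prefix = (
    b"GET /order?item=pizza&deliver-to=attacker HTTP/1.1\r\n"
    b"X-Ignore-Rest: "  # hypothetical header that swallows the victim's first line
)
victim_request = (
    b"GET /account HTTP/1.1\r\n"
    b"Cookie: session=victim-secret\r\n\r\n"
)

stream_seen_by_server = attacker_prefix + victim_request

# The victim's authenticated cookie now rides along with the attacker's
# request line (hence "order a pizza as the client").
first_line = stream_seen_by_server.split(b"\r\n", 1)[0]
assert first_line.startswith(b"GET /order?item=pizza")
assert b"session=victim-secret" in stream_seen_by_server
```

The fix described below, cryptographically binding the pre-renegotiation session to the post-renegotiation one, breaks exactly this splice.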

As for the response of a popular SSL/TLS implementation, the OpenSSL security advisory can be found here,

The workaround in 0.9.8l simply bans all renegotiation. Because of the
nature of the attack, this is only an effective defence when deployed
on servers. Upgraded clients will still be vulnerable.

A TLS extension has been defined which will cryptographically bind the
session before renegotiation to the session after. We are working on
incorporating this into 0.9.8m, which will also incorporate a number
of other security and bug fixes.

Oh, and about Tor,

The Tor protocol isn’t vulnerable here because 1) it doesn’t allow data to be sent before the renegotiation step, and 2) it doesn’t treat a renegotiation as authenticating previously exchanged data (because there isn’t any).

-

A couple of minor notes when upgrading Ubuntu Jaunty (9.04) to Karmic (9.10)…

Karmic no longer uses event.d.

The version of upstart included in Ubuntu 9.10 no longer uses the configuration files in the /etc/event.d directory, looking to /etc/init instead. No automatic migration of changes to /etc/event.d is possible. If you have modified any settings in this directory, you will need to reapply them to /etc/init in the new configuration format by hand.

This specification contains the documentation for the Upstart packaging and upgrade policy.

For example, for the purposes of djbdns, this meant that the “/etc/event.d/svscan” configuration had to be converted to “/etc/init/svscan.conf”. I ended up using the following, which is slightly different than what the Ubuntu djbdns package installs (bullet point 8 of this post is relevant).

# svscan - daemontools -- http://www.froyn.net/blosxom/blosxom.cgi/2007/1/12
#
# This service starts daemontools from the point the system is
# started until it is shut down again.

start on runlevel [2345]
stop on runlevel [!2345]

respawn
exec /usr/bin/svscanboot


And, rsyslog is now in use,

The sysklogd package has been replaced with rsyslog. Configurations in /etc/syslog.conf will be automatically converted to /etc/rsyslog.d/50-default. If you modified the log rotation settings in /etc/cron.daily/sysklogd or /etc/cron.weekly/sysklogd, you will need to change the new configurations in /etc/logrotate.d/rsyslog. Also note that the prior rotation configurations used .0 as the first rotated file extension, and now via logrotate it will be .1.

I played around a little with logging in Ubuntu 8.10 in this post (bullet point 7).

(On a side note, if you are doing more than just simple basics with syslog, it may be time to consider replacements like rsyslog and syslog-ng.)

-

Didn’t I mention beauty and elections at some point? Yep,

Perhaps a just as simple and maybe better way to pick who will be elected president than answering “who is taller?” is to answer this more general question – who best looks the part? (A little lamination goes a long way.) And, of course, a consensus answer gives better results than each individual answer here.

Well, this paper fits right in with that post.

Are beautiful politicians more likely to be elected? To test this, we use evidence from Australia, a country in which voting is compulsory, and in which voters are given ‘How to Vote’ cards depicting photos of the major party candidates as they arrive to vote. Using raters chosen to be representative of the electorate, we assess the beauty of political candidates from major political parties, and then estimate the effect of beauty on voteshare for candidates in the 2004 federal election. Beautiful candidates are indeed more likely to be elected, with a one standard deviation increase in beauty associated with a 1½ – 2 percentage point increase in voteshare. Our results are robust to several specification checks: adding party fixed effects, dropping well-known politicians, using a non-Australian beauty rater, omitting candidates of non-Anglo Saxon appearance, controlling for age, and analyzing the ‘beauty gap’ between candidates running in the same electorate. The marginal effect of beauty is larger for male candidates than for female candidates, and appears to be approximately linear. Consistent with the theory that returns to beauty reflect discrimination, we find suggestive evidence that beauty matters more in electorates with a higher share of apathetic voters.

-

This made me laugh.

[...]But I was most impressed with this anonymous bit of genius dug up by Digg, which uses Google for some armchair sociolinguistic analysis. The graphic compares “less intelligent” queries with “more intelligent” queries, such as “how 2” with “how might one”:

-

Lastly, TrueCrypt 6.3 has been released with the following new features.

Full support for Windows 7.

Full support for Mac OS X 10.6 Snow Leopard.

The ability to configure selected volumes as ‘system favorite volumes’. [...]

Oh, and I should mention… So, we had hotplug and cold boot. And, of course, when your system is up and running and transparently encrypting/decrypting data, a stock exploit of, say, the OS could easily mean game over. Now, people are using bootkit functionality against TrueCrypt.

The provided implementation is extremely simple. It first reads the first 63 sectors of the primary disk (/dev/sda) and checks (looking at the first sector) if the code there looks like a valid TrueCrypt loader. If it does, the rest of the code is unpacked (using gzip) and hooked. Evil Maid hooks the TC’s function that asks user for the passphrase, so that the hook records whatever passphrase is provided to this function. We also take care about adjusting some fields in the MBR, like the boot loader size and its checksum. After the hooking is done, the loader is packed again and written back to the disk.
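As a toy sketch of just the detection step (the marker string and offsets below are my own illustrative assumptions, not the actual Evil Maid code, which inspects the real loader code), checking whether a 512-byte boot sector plausibly holds a TrueCrypt-style loader might look like:

```python
SECTOR_SIZE = 512
ASSUMED_MARKER = b"TrueCrypt Boot Loader"  # hypothetical marker string

def looks_like_tc_loader(sector: bytes) -> bool:
    """Heuristic check of a raw boot sector (illustrative only)."""
    if len(sector) != SECTOR_SIZE:
        return False
    # A valid MBR-style boot sector ends with the 0x55 0xAA signature.
    if sector[510:512] != b"\x55\xaa":
        return False
    # Then look for the (assumed) loader marker anywhere in the sector.
    return ASSUMED_MARKER in sector

# Build a fake boot sector to exercise the check.
fake = bytearray(SECTOR_SIZE)
fake[32:32 + len(ASSUMED_MARKER)] = ASSUMED_MARKER
fake[510:512] = b"\x55\xaa"

print(looks_like_tc_loader(bytes(fake)))    # True
print(looks_like_tc_loader(b"\x00" * 512))  # False
```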

Granted, an attacker gaining physical access to a system, tampering with the hardware or planting a malicious bootloader or what have you, and then having the victim use that compromised system (which proceeds to, say, record the TrueCrypt password for the attacker) is outside the protections of the basic TrueCrypt FDE scenario. Still, seeing bootkits demonstrate their capabilities against TrueCrypt, and illustrate just what protections tools like TrueCrypt do and do not provide, is quite cool. (And, with that, this post draws to its close.)

## Number

September 3rd, 2009

Like Whitman’s spider, here we cast a filament between a few reads, prompted by “Number”.

-

Somewhat in keeping with recent posts, I found Tobias Dantzig’s “Number” (fourth edition – 1953) an interesting tale of the evolution of the number concept, one that calls attention to things we so completely take for granted today, such as positional numeration (“The struggle between the Abacists, who defended old traditions, and the Algorists, who advocated the reform, lasted from the eleventh century to the fifteenth century and went through all the usual stages of obscurantism and reaction.”). Much of the book is fascinating; the following excerpts may be of particular interest to our readers.

(The human computer is quite different from that there laptop,)

The advantages of the base two are economy of symbols and tremendous simplicity in operations. It must be remembered that every system requires that tables of addition and multiplication be committed to memory. For the binary system these reduce to 1 + 1 = 10 and 1 X 1 = 1; whereas for the decimal, each table has 100 entries. Yet this advantage is more than offset by lack of compactness: thus the decimal number 4096 = 2^12 would be expressed in the binary system by 1,000,000,000,000.
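Dantzig’s trade-off is easy to verify: counting one entry per ordered pair of digits, the tables grow with the square of the base, while the written numerals shrink. A quick sketch:

```python
# Size of the addition or multiplication table a schoolchild must
# memorize for a given base: one entry per ordered pair of digits.
def table_entries(base: int) -> int:
    return base * base

print(table_entries(2))   # 4 entries per table in binary
print(table_entries(10))  # 100 entries per table in decimal

# Dantzig's compactness example: 4096 = 2**12, decimal vs. binary.
n = 4096
print(len(str(n)))        # 4 decimal digits
print(len(bin(n)) - 2)    # 13 binary digits: 1 followed by twelve 0s
print(bin(n)[2:])         # 1000000000000
```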

(On the term cipher and the spread of the concept of zero,)

[...]In the English language the word cifra has become cipher and has retained its original meaning of zero.

The attitude of the common people toward this new numeration is reflected in the fact that soon after its introduction into Europe, the word cifra was used as a secret sign; but this connotation was altogether lost in the succeeding centuries. The verb decipher remains as a monument of these early days.

(Think infinite and calculus,)

The conflict between discrete and continuous is not a mere product of school dialectics: it may be traced to the very origin of thought, for it is but the reflection of the ever present discord between this conception of time as a stream and the discontinuous character of experience. For, in the ultimate analysis, our number concept rests on counting, i.e., on enumerating the discrete, discontinuous, interrupted, while our time intuition paints all phenomena as flowing. To reduce physical phenomena to number without destroying its streamlike character – such is the Herculean task of the mathematical physicist; and, in a broad sense, geometry too should be viewed as but a branch of physics.

(And here we take a trip…)

The latter brings to mind Virginia Woolf’s “Mrs. Dalloway” (1925),

All the same that one day should follow another; Wednesday, Thursday, Friday, Saturday; that one should wake up in the morning; see the sky; walk in the park; meet Hugh Whitbread; then suddenly in came Peter; then these roses; it was enough. After that, how unbelievable death was! – that it must end; and no one in the whole world would know how she had loved it all; how, every instant …

Of course, I note the following congruence between “Spent” and “Mrs. Dalloway”…

From Geoffrey Miller’s “Spent” (2009),

[...]Almost every dog breed has some idiosyncratic high-maintenance feature that makes it an effective means for displaying conscientiousness, and all require regular feeding, watering, walking, vet care, and diligent physical restraint by leashes and fences. Hence the social and sexual popularity of single people who can be seen walking dogs that are conspicuously well fed, well-groomed, well trained, nonneurotic, and nondead.[...]

From Woolf’s “Mrs. Dalloway”,

Then Clarissa, still with an air of being offended with them all, got up, made some excuse, and went off, alone. As she opened the door, in came that great shaggy dog which ran after sheep. She flung herself upon him, went into raptures. It was as if she said to Peter – it was all aimed at him, he knew – “I know you thought me absurd about that woman just now; but see how extraordinarily sympathetic I am; see how I love my Rob!”

Now, Elena Ferrante’s “Days of Abandonment” is stylistically reminiscent of Woolf’s “Mrs. Dalloway”. As Septimus (and Clarissa) verge on dissolution in “Mrs. Dalloway,” Olga in “Days of Abandonment” gives us a view into the mind of someone on the brink of insanity.

From “Days of Abandonment” (2005 English translation),

[...]I tried again to open the door. I couldn’t do it. I leaned over, I examined the key closely. Finding the imprint of the old gestures was a mistake. I had to disengage them. Under the stupefied gaze of Ilaria, I brought my mouth to the key, tasted it with my lips, smelled the odor of plastic and metal. Then I grabbed it solidly between my teeth and tried to make it turn. I did it with a sudden jerk, as if I wished to surprise the object, impose a new statute, a different dispensation. Now we’ll see who wins, I thought, while a pasty, salty taste invaded my mouth. But I produced no effect, except the impression that, because the rotating movement of my teeth on the key wasn’t working, it was finding an outlet in my face, tearing it like a can opener, and my teeth were moving, were being unhinged from the foundation of my face, taking with them the nasal septum, an eyebrow, an eye, and revealing the viscid interior of my head.

(Anon, through sane eyes, we find “The key turned in the lock simply.”)

Notable is the portrayal of insanity as a representation of the world constructed by a person’s mind that skews too far from the necessities of the reality around them, a representation incoherently blurred by memories and imaginations and hallucinations, a representation that makes their ability to survive, reproduce, function out there in the world quite difficult. (One could think of the economic bubble that recently occurred as insanity.)

Which somehow made me think: if people’s representations of the world could be assigned some sort of numerical metric and plotted against the number of people holding each representation, would we see the bell curve of a normal distribution? Ah, but bell curves bring to mind means. Aristotle spoke of means; to that end, from “The Ethics of Aristotle” (1953 English translation),

In anything continuous and divisible it is possible to take the half, or more than the half, or less than the half. Now these parts may be larger, smaller, and equal either in relation to the thing divided or in relation to us. [...] The man who knows his business avoids both too much and too little. It is the mean he seeks and adopts – not the mean of the thing but the relative mean.

Of course, Aristotle makes me think of Greek philosophers. Coming back to “Number”, Dantzig quotes Philolaus, a Pythagorean, as saying,

All things which can be known have number; for it is not possible that without number anything can be either conceived or known.

Alas for the Pythagoreans, the irrationals revealed by the Pythagorean Theorem! Hippasus drowned! Dancing round the ruler and compass, what was it Eliot said in “The Hollow Men”,

Between the idea
And the reality
Between the motion
And the act

Moving us toward where we parted, William Blake in “Auguries of Innocence” now pops into mind,

To see a World in a grain of sand,
And Heaven in a wild flower,
Hold Infinity in the palm of your hand,
And Eternity in an hour.

Perhaps, as Wordsworth once wrote, “the world is too much with us”; but then we have these sublime writers, poets, mathematicians! Alas, the act is up; the thread must be caught; summer’s grand finale is at the door. So, we end with the close of Dantzig’s “Number”,

In this, then, modern science differs from its classical predecessor; it has recognized the anthropomorphic origin and nature of human knowledge. Be it determinism or rationality, empiricism or the mathematical method, it has recognized that man is the measure of all things, and that there is no other measure.

It’s turtles all the way down, dear reader, turtles all the way down!

## Violent rambling, etc.

August 7th, 2009

Behold! Behold!
What lo?
A ramble! A ramble!
What? No!
‘Tis so! ‘Tis so!

-

I think we often take for granted the hidden knowledge built into our cultures, our institutions, our norms, to the degree that we take the results of these structures, of this genetic code for our society, as just a given. I must say that, when I was younger, the more radical beliefs I held were in part motivated by the accumulated noise, the junk, I perceived in the structures about me, while ignoring the information also contained therein; now, I have more respect for the hidden knowledge in such systems, and for the hidden assumptions underlying those systems and their knowledge.

So, I recently read Girard’s “Violence and the Sacred”. In it, Girard discusses the idea that all society, all culture, even all symbolic thought is the byproduct of what he calls a sacrificial crisis: a cycle of reciprocal violence and violent undifferentiation within a community that culminates in the elimination of a sacrificial victim, whereby the community unites against one of its members and, in a generative and bonding moment, releases its violence upon that member, thus expelling violence from the community. According to Girard, religion enshrines this sacrificial crisis and victim, and ritual allows for the re-enactment of the crisis and victim to expunge the violence within the community. (Girard’s, say, over-the-top argument is that this crisis-victim is the foundation of all community and culture, and that trying to understand the crisis-victim is what caused symbolic thought to emerge. Side note – I mention symbolic thought with regards to the financial system in this post.)

Thinking on it, I can see the sacrificial crisis culminating in a sacrificial victim, and the ritualized enactment of such an event, applied within communities in the world in which I live today. As Girard discusses, while sacrificial victims may be selected at random in the midst of a crisis, surrogate victims used during the ritualized replay of the sacrificial crisis are often chosen both because they have ties to the community (and so can be substituted for it) and yet because they are outside it (and so have little potential to inspire reciprocal violence). When I look at, say, news of late, where “obese” people seem to be sacrificed in the war on health care costs, I cannot help but see parallels to the surrogate victim in primitive religious rituals (irrespective of any opinion held on the matter).

There is an interesting misdirection noted by Girard as well, the belief that violence stems from something outside of humankind, such as the gods or the dead. This too it seems is visible today in much rhetoric. People speak as if violence itself stems from a gun or from a video game or from a car, from these sacred and powerful items that are outside the world of people, external objects capable of infecting people with violence when they descend upon a community. (I am ignoring any correlation that may exist between these objects and people’s propensities toward violence.)

Regardless of views, the expulsion of violence from within the community appears to be hugely important to, and completely buried in, the very foundations of society as we know it today, at least in the culture in which I reside. This seems to be so much the case that we take the end result for granted.

For example, Girard points out the significance of the judicial system in replacing religion: the judicial system takes on a higher authority as an impartial and superior body that metes out revenge, relabeled with a transcendent term, justice, upon parties of established guilt in the name of the community. The very sublime nature of the institution places the revenge the judicial system deals out beyond revenge, and so it breaks the potential chain of reciprocal violence.

We often take the incorporation of the judicial institution into our culture for granted, as we assume such institutions and such cultures are a given, natural. But, violence has been with us long before such institutions existed. Nature is violent at times, and it makes sense that we have evolved a potential for violence ourselves; we have survived over the long course of time both in spite of and because of violence. As such and in broader terms, I am beginning to think that taking current culture, current norms, current institutions for granted is quite dangerous.

This idea of the generative and community-building aspects of violent unanimity also seems in line with Peter Turchin’s ideas in “War and Peace and War” (mentioned in this post, particularly bullet point 2) that a common struggle against, say, a foreign enemy can bring people together, that it builds the capacity for collective action. Great nations are forged in frontier life, in the world of violent interactions between more similar and less similar peoples. In this light, Girard briefly notes wars with foreign nations as an example of unifying violence: the foreign nation becomes the sacrificial victim, the embodiment of a violence that is outside the community, and yet a violence that has been brought to the community and must be violently expunged.

Girard notes that change is often feared by primitive communities as a potential trigger of violence, and much ritual is performed around change to relieve any potential for violent buildup, such as at times of seasonal transition or the coming of age of a child. In bullet point 2 of the aforementioned post, I noted that “radical change can cause instability”, along with a slightly broader discussion of change, or adaptation, in a follow-up post. Additionally, Schoeck notes in “Envy” that extreme envy and envy avoidance, such as that exhibited in primitive communities and small towns, can often stifle innovation, creativity, and achievement, all of which have undertones of change. Combining this with Girard: envy threatens violence, and, violence being contagious, such a threat could wipe out a whole community, which leads to innovation, creativity, and achievement, as purveyors of change, being punished and avoided.

Now, it is easy to forget walking down the street in NYC just how much everything around us requires this breaking of the chain of reciprocal violence. In the past, when men could easily descend into a tornado of violence and everything else was swept aside by it, the world as we see it now in NYC could not exist. The weak would have to huddle together and perhaps flee the storm, and the strong could tear everything down in a moment of rage.

That is not to say that violence is not still present, but most people tend toward non-initiation of violence, and the rare initiation of violence tends to end at the time it begins. Self-defense is acceptable at the moment of being attacked, but, after the fact, we rely on law enforcement and the judicial system to catch and punish the aggressor; vigilantism and personal revenge are frowned upon and punished, norms that reinforce the breaking of reciprocal violence. (In some communities in NYC and the USA in general, reciprocal violence may still run free, as with gang violence or the eruption of riots; however, those are the exceptions and not the norm.)

Evolution is the way of nature and, as such, ourselves, and I rather enjoy this spiraling progression, much as I like the evolution of what I consider to be my self as I grow older. As noted by Turchin’s “War and Peace and War” or in Michael Flynn’s scifi-esque “Introduction to Cliology” (inspired by Asimov) or even Barrow’s “The Artful Universe”, we live in cycles within cycles, in this chaotic nature, and so we must. But, I maintain a certain sense of caution when unraveling the fabric of our modern world, as pulling at the strings of our norms, cultures, institutions, communities, etc. without properly considering their hidden store of knowledge, and the hidden assumptions one might be making, can have quite profound and completely unintended consequences, for better or worse.

-

I mentioned “the map is not the territory” in this post. How about another? “Correlation is not causation.” This is one of those great insights that we so often fail to see. I have made this mistake at times in this blog.

So, if I had remembered nothing else from Judith Rich Harris’ “The Nurture Assumption”, the wit, and the clear and concise explanation of “correlation is not causation” that makes the concept easy for me to explain to others, would have made the book worth it. An excerpt of her fictional example illustrating that correlation is not causation,

[...]Our method will be straightforward: we will ask a large number of middle-aged people how much broccoli they consume and then, five years later, check to see how many of them are still alive.[...]

[fictional results showing a statistically significant correlation between eating broccoli and longevity in men but not women]

Our study appears in an epidemiological journal. A newspaper reporter happens to read it. The next day there’s a headline in the paper: EATING BROCCOLI MAKES MEN LIVE LONGER, STUDY SHOWS.

But does it? Does the study show that eating broccoli caused the male subjects to live longer? Men who eat broccoli may also eat a lot of carrots and Brussels sprouts. They may eat less meat or less ice cream than broccoli shunners. Perhaps they are more likely to exercise, more likely to buckle their seatbelts, less likely to smoke. Any of these other lifestyle factors, or all of them together, may be responsible for the longer lives of the broccoli eaters. Eating broccoli might even have been shortening our subjects’ lives, but this effect was outweighed by the beneficial effects of all the other things broccoli eaters were doing.
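Harris’s point can be made concrete with a toy simulation (entirely fabricated, in the spirit of her fictional study): below, a hidden “health-consciousness” trait drives both broccoli eating and survival, so broccoli correlates with survival even though it is given no causal effect at all.

```python
import random

random.seed(42)

n = 10_000
survived_b, total_b = 0, 0    # broccoli eaters
survived_nb, total_nb = 0, 0  # broccoli shunners

for _ in range(n):
    # Hidden confounder: health-consciousness.
    health_conscious = random.random() < 0.5
    # Health-conscious people eat broccoli more often...
    eats_broccoli = random.random() < (0.8 if health_conscious else 0.2)
    # ...and survive more often. Broccoli itself has NO effect here.
    survives = random.random() < (0.9 if health_conscious else 0.6)
    if eats_broccoli:
        total_b += 1
        survived_b += survives
    else:
        total_nb += 1
        survived_nb += survives

rate_b = survived_b / total_b
rate_nb = survived_nb / total_nb
print(round(rate_b, 2), round(rate_nb, 2))  # broccoli eaters "live longer"
```

The headline writes itself, yet by construction the only causal factor is the confounder.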

-

The 0.2.1.x branch of Tor has gone to release.

Tor 0.2.1.18 lays the foundations for performance improvements, adds status events to help users diagnose bootstrap problems, adds optional authentication/authorization for hidden services, fixes a variety of potential anonymity problems, and includes a huge pile of other features and bug fixes.

Bravo.

-

Attacks only get better.

In this paper we describe several attacks which can break with practical complexity variants of AES-256 whose number of rounds are comparable to that of AES-128. One of our attacks uses only two related keys and 2^39 time to recover the complete 256-bit key of a 9-round version of AES-256 (the best previous attack on this variant required 4 related keys and 2^120 time). Another attack can break a 10 round version of AES-256 in 2^45 time, but it uses a stronger type of related subkey attack (the best previous attack on this variant required 64 related keys and 2^172 time). While neither AES-128 nor AES-256 can be directly broken by these attacks, the fact that their hybrid (which combines the smaller number of rounds from AES-128 along with the larger key size from AES-256) can be broken with such a low complexity raises serious concern about the remaining safety margin offered by the AES family of cryptosystems.

## Hash competition round 2 candidates announced, etc.

July 24th, 2009

And then there were 14…

The round two candidates have been selected for the NIST cryptographic hash algorithm competition. From the competition’s email list,

Date: Fri, 24 Jul 2009 09:36:00 -0400
From: “Burr, William E.”
To: Multiple recipients of list

NIST received 64 SHA-3 candidate hash function submissions and accepted 51 first round candidates as meeting our minimum acceptance criteria. We have now selected the following 14 second round candidates to continue in the competition:[...]

Noticeably absent is MD6, which the MD6 team foreshadowed earlier this month on the competition’s email list.

Date: Wed, 1 Jul 2009 10:55:30 -0400
From: “Ronald L. Rivest”
To: Multiple recipients of list
Subject: OFFICIAL COMMENT: MD6

[...]
Thus, while MD6 appears to be a robust and secure cryptographic hash algorithm, and has much merit for multi-core processors, our inability to provide a proof of security for a reduced-round (and possibly tweaked) version of MD6 against differential attacks suggests that MD6 is not ready for consideration for the next SHA-3 round.
[...]

I noted the competition in this old post.

Update: NIST has issued a report on the first round of the hash competition. Additionally, the round 2 candidate submission packages have been updated if tweaks were made to these packages by their authors.

Update: From the hash competition mailing list,

Date: Mon, 1 Feb 2010 12:10:48 -0500
From: “Chang, Shu-jen H.”
To: Multiple recipients of list
Subject: Call for Papers for the Second SHA-3 Candidate Conference

FYI, attached is the Call for Papers for the Second SHA-3 Candidate Conference, to be held at UCSB after Crypto and CHES 2010. Please note that the submission deadline is May 10, 2010, and submissions should be sent to hash-function@nist.gov.

Regards,
Shu-jen

From the CFP,

Call for Papers for the Second SHA-3 Candidate Conference
Santa Barbara, CA
August 23-24, 2010
Submission deadline: May 10, 2010 (Conference without proceedings)
The SHA-3 competition has entered the second round, in which 14 second-round candidate algorithms are being considered for SHA-3. NIST plans to host a Second SHA-3 Candidate Conference in August, 2010 to discuss various aspects of these candidates, and to obtain valuable feedback for the selection of the finalists soon after the conference.

The web page for the second conference is here.

-

On a side note, at the beginning of the month, NIST released a document meant to elicit discussion on algorithm transition strategies and timelines as we approach the deprecation of 80-bit crypto stacks, for Federal agency purposes as well as for the world at large. Per the announcement,

Comments are requested on the white paper “The Transitioning of Cryptographic Algorithms and Key Sizes” by August 3, 2009. Please provide comments to CryptoTransitions@nist.gov.

I actually enjoyed skimming this document. Besides the broad transition discussion, it provides useful summaries of the cryptographic algorithms allowed for FIPS 140 purposes and in common use out in the world. It pulls together concise, use-oriented descriptions of the various algorithms and the documents that specify them, including their normal uses and their strengths for those uses. We often talk about crypto stacks, and this document makes it easy to see a crypto stack. Also, it makes sense that this topic is being hashed out prior to FIPS 140-3.

Update: NIST posted the comments received up to 2009-07-24 on the transition paper. This was updated to comments received through 2009-08-14.

-

Rather than creating a new post, I figured I would add these here.

NIST has published a draft summary of the Cryptographic Key Management Workshop. As per the announcement,

NIST announces that the Draft NIST Interagency Report 7609, Cryptographic Key Management Workshop Summary (June 8-9, 2009), is available for public comment. The Cryptographic Key Management (CKM) workshop was initiated by the NIST Computer Security Division to identify and develop technologies that would allow organizations to leap ahead of normal development lifecycles to vastly improve the security of future sensitive and valuable computer applications. The workshop was the first step in developing a CKM framework. This summary provides the highlights of the presentations, organized by both topic and by presenter.[...]

NIST has published a draft of SP 800-38E, dealing with NIST approval of the XTS mode of operation for AES, for comment. As per the announcement,

NIST announces that the Draft NIST Special Publication 800-38E, Recommendation for Block Cipher Modes of Operation: The XTS-AES Mode for Confidentiality on Block-Oriented Storage Devices, is available for public comment. This document approves the XTS-AES mode of the AES algorithm by reference to IEEE Std 1619-2007, subject to one additional requirement, as an option for protecting the confidentiality of data on block-oriented storage devices. This mode does not provide authentication, in order to avoid expansion of the data; however, it does provide some protection against malicious manipulation of the encrypted data.

Update: NIST has published SP 800-56B, as described here.

NIST announces the completion of Special Publication (SP) 800-56B, Recommendation for Pair-Wise Key Establishment Schemes Using Integer Factorization Cryptography. This Recommendation provides the specifications of key establishment schemes that are based on a standard developed by the Accredited Standards Committee (ASC) X9, Inc.: ANS X9.44, Key Establishment using Integer Factorization Cryptography. SP 800-56B provides asymmetric-based key agreement and key transport schemes that are based on the Rivest Shamir Adleman (RSA) algorithm.

NIST has published SP 800-102 and SP 800-120, as described here.

NIST announces the completion of Special Publication 800-102, Recommendation for Digital Signature Timeliness. Establishing the time when a digital signature was generated is often a critical consideration. A signed message that includes the (purported) signing time provides no assurance that the private key was used to sign the message at that time unless the accuracy of the time can be trusted. With the appropriate use of digital signature-based timestamps from a Trusted Timestamp Authority (TTA) and/or verifier-supplied data that is included in the signed message, the signatory can provide some level of assurance about the time that the message was signed.

The National Institute of Standards and Technology (NIST) is pleased to announce the release of Special Publication 800-120. Recommendation for EAP Methods Used in Wireless Network Access Authentication. This Recommendation formalizes core security requirements for EAP methods when employed by the U.S. Federal Government for wireless authentication and key establishment.

## Quickies – FIPS 186-3 etc., other notes, metro, etc.

June 11th, 2009

Just a few quick and mostly old notes accumulated over the last few months…

FIPS 186-3 has been released, as announced here.

This notice announces the Secretary of Commerce’s approval of Federal Information Processing Standard (FIPS) Publication 186–3, Digital Signature Standard (DSS). FIPS 186–3 is a revision of FIPS 186–2. The FIPS specifies three techniques for the generation and verification of digital signatures that can be used for the protection of data: the Digital Signature Algorithm (DSA), the Elliptic Curve Digital Signature Algorithm (ECDSA) and the Rivest-Shamir-Adelman (RSA) algorithm. Although all three of these algorithms were approved in FIPS 186–2, FIPS 186–3 increases the key sizes allowed for DSA, provides additional requirements for the use of RSA and ECDSA, and includes requirements for obtaining the assurances necessary for valid digital signatures. FIPS 186–2 contained specifications for random number generators (RNGs); this revision does not include such specifications, but refers to NIST Special Publication (SP) 800–90 for obtaining random numbers.

[...]FIPS 186–3 allows the use of 1024, 2048 and 3072-bit keys. Other requirements have also been added concerning the use of ANS X9.31 and ANS X9.62. In addition, the use of the RSA algorithm as specified in Public Key Cryptography Standard (PKCS) #1 (RSA Cryptography Standard) is allowed.

In the announcement, some of the changes between the last draft and the final document are covered in a comments received and NIST response format. I found the most interesting NIST responses to be…

This permits the use of a public key algorithm and key size that is stronger in security than a hash algorithm, so long as both provide sufficient security for the digital signature process. The use of hash algorithms that provide equivalent or stronger security than the public key algorithm and key size is still encouraged as a general practice.

NIST studied the suggestion and decided not to impose further restrictions on the selection of the public exponent e. Such restrictions would negatively impact NIST’s Cryptographic Module Validation Program (CMVP) by precluding the validation of currently accepted implementations without providing a significant increase in security.

NIST reviewed the comments and made the appropriate changes to ensure alignment with respect to the generation and management of ECDSA domain parameters. NIST deleted the statement ‘‘ANSI X9.62 has no restriction on the maximum size of [the cofactor]’’, since the current version of X9.62 imposes limitations on the size of the cofactor. NIST also revised statements regarding elliptic curve domain parameter generation for purposes other than digital signature generation.

I commented on an early draft here and mentioned a later draft here.

Update: Annex A and Annex C of FIPS 140-2 have been updated to reference FIPS 186-3.

The algorithm testing tool has been updated to include testing for FIPS 186-3 algorithms, with some exceptions. As found in the announcements,

New release of the CAVS algorithm validation testing tool to the CST Laboratories (CAVS8.0). This version of the CAVS tool activates the FIPS 186-3 DSA2 validation testing with the exception of generation and validation of provably prime domain parameters p and q and canonical generation and validation of domain parameter g. It also requires the IUT to specify the assurances necessary for valid digital signatures specified in FIPS 186-3.

The exceptions to the algorithm testing tool are vendor affirmed for now, as per an update to the FIPS 140-2 Implementation Guidance.

Validation testing for FIPS 186-3, Digital Signature Standard (DSS) is separated into the three digital signature algorithms. Validation testing is available for FIPS 186-3 DSA, with the exception of the domain parameter generation and validation method listed above. These methods, along with FIPS 186-3 ECDSA and RSA, will require vendor affirmation until validation testing is available in the CAVS tool.

Update: NIST has released a new version of the algorithm testing tool (8.1) that “addresses several minor modifications and enhancements to CAVS including the Addition of a cover letter template, the addition of more efficient elliptic curve routines for NIST binary (e.g., B-163 and K-571) curves, and the modification of several minor issues.”

-

Speaking of FIPS, the FIPS 140-2 implementation guidance (IG) has been updated over the last few months. There has been much in the way of important and clarifying guidance, translating “FIPS lore”, forward thinking on algorithms/protocols, and revs to FIPS 140 into the current requirements.

New Guidance
- 04/01/09: 3.2 Bypass Capability in Routers
- 04/01/09: 9.5 Module Initialization during Power-Up
- 03/24/09: 7.9 Procedural CSP Zeroization
- 03/10/09: 1.14 Key/IV Pair Uniqueness Requirements from NIST SP 800-38D
- 03/10/09: 5.3 Physical Security Assumptions
- 03/10/09: 7.8 Key Generation Methods Allowed in FIPS Mode

Modified Guidance
- 03/10/09: G.1 Request for Guidance from the CMVP – Updated NIST POC.
- 03/10/09: G.5 Maintaining validation compliance of software or firmware cryptographic modules – Updated references to firmware and hybrid modules.
- 03/10/09: G.13 Instructions for completing a FIPS 140-2 Validation Certificate – Updated examples.
- 03/10/09: 1.9 Definition and Requirements of a Hybrid Cryptographic Module – Updated to include hybrid firmware modules.
- 03/10/09: 7.1 Acceptable Key Establishment Protocols – For Key Agreement; added that the KDF specified in the SRTP protocol (IETF RFC 3711) is allowed only for use as part of the SRTP key derivation protocol. For Key Transport; added wrapping a key using the GDOI Group Key Management Protocol described in IETF RFC 3547.

I don’t think any of these are a surprise, but reading through it might be useful. Much has been set down cleanly here, with the possible exception of IG 9.5, which seemed more confusing than clarifying to me, but maybe that was a case of my (unfortunately) “knowing” too much about the FIPS world. Anyway, the changes to or additions of IGs 1.9, 7.1, 7.8, and 1.14 are perhaps of the most interest.

Since I have been coming at things from a more deployment side these days, I think IG 5.3 helps to illustrate how to interpret FIPS 140-2 validation results as applied to the selection and deployment of FIPS modules.

The four physical security levels of FIPS 140-2 are focused on the protection of the modules CSPs by the module itself independent of the environment the module is deployed. Therefore selection of a security level is greatly influenced by the environment the module is to be deployed. At a Level 1 security level, which does not itself provide physical security protection, in the right environment, may be an acceptable solution because the environment provides the required physical security protection features.

In this same deployment regard, IG 1.14 makes the following note.

2. The standard sets the minimum security requirements. The buyer is free to demand that the module allows for longer names. Users should be smart enough to name their modules in such a way that name collisions become extremely rare.

Also, for the old timers, the following was an area of debate back in the day. Now it is officially documented in IG 7.8.

To be used in FIPS mode, a secret value K can be any value of the form:

K = U XOR V, (1)

where the components U and V are of the same length as K, are “independent” of each other, and U is derived, possibly using a qualified post-processing (see below), from the output of an approved RNG in the module that is generating K. In addition, each component may be a function of other values (e.g., U = F(U’), or V = F(V’)).

The security strength of the generated value K is equal to the larger of the security strengths of U and V. In general, the security strength of K is determined by the security strength of U, and the security strength of U is the minimum of the length of U (and K) and the security strength of the RNG used to generate U. Therefore, the length of U (and K), and the security strength of the RNG used to generate U shall meet or exceed the security strength required for K. However, a vendor can claim that the security strength of the generated value K is determined by the security strength of V if it can be demonstrated that V has a higher security strength than U.
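For what it is worth, the XOR construction is easy to play with. The following is a toy sketch using made-up 32-bit hex values and shell arithmetic – obviously not how a real module generates keys, where U must come from an approved RNG and the components span the full key length:

```sh
#!/bin/sh
# Toy illustration of IG 7.8's K = U XOR V, using made-up 32-bit values.
# In a real module, U would be derived from the output of an approved RNG.
U=1a2b3c4d
V=ffeeddcc

# combine the components
K=$(printf '%08x' $(( 0x$U ^ 0x$V )))
echo "K = $K"    # e5c5e181

# XOR is self-inverse, so K XOR V gives back U
printf 'U = %08x\n' $(( 0x$K ^ 0x$V ))
```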

Update: NIST released revised implementation guidance. Besides “certificate annotation examples” added to a few sections, there was the following modification.

08/04/09: 7.1 Acceptable Key Establishment Protocols – For Key Agreement; removed the KDF specified in the SRTP protocol (IETF RFC 3711). For Key Transport; added reference to EAP-FAST and PEAP-TLS.

-

Since I am on this topic, I guess a quick note that a number of new revs of the CAVS tool were released in the months prior is in order. Most were bugfixes and tweaks, but there were some additions to supported algorithms. E.g.,

[04-15-09] — New release of the CAVS algorithm validation testing tool to the CST Laboratories (CAVS7.4). This version of the CAVS tool adds the capability to test DRBG (NIST SP 800-90) implementations that do not have the optional reseed function.[...]

[03-12-09] — New release of the CAVS algorithm validation testing tool to the CMT Laboratories (CAVS7.1). In addition to minor modifications to enhance the tool, Version 7.1 of the CAVS tool adds testing for the draft of FIPS 186-3, Digital Signature Standard, Digital Signature Algorithm and file formating changes to the NIST SP 800-90 (DRBG) files to make them more similar to those used for other algorithms. As of the CAVS 7.1 release date, FIPS 186-3 is still in draft. No validations will be processed for this algorithm.

Update: NIST released the CAVS Management Manual. As per the announcements,

[08-17-09] — Posting of the CAVS Management Manual. The purpose of the CAVP Management Manual is to provide effective guidance for the management of the CAVP and other parties in the validation process.[...]

-

Switching direction, upgrading Gnome from 2.24 to 2.26 on FreeBSD through the ports went mostly smoothly. As always, follow the FAQ. I had been using xulrunner for a while, but I was happy to see this change, allowing for the removal of FF2.

A FreeBSD port of libxul-1.9 has been added as an alternative Gecko provider to Firefox 2. This can be used by setting WITH_GECKO=libxul in /etc/make.conf.

-

Moving along, this gave me deja vu, yet made me laugh. (Please consider avoiding that previous link and the first link in the following blockquote’d text if you mind profanity.)

[...]Listen to his comedy bit about the DC metro system, which he says is “the [...] Tron compared to NYC subways[...]

Having lived both in NYC and DC metro areas and having ridden both the NYC subway and the DC metro quite a bit, I think there are two points to make about the DC Metro system.

1. The DC Metro is minor in scope compared to the NYC subway. Not only is it a fraction of the size moving a fraction of the people, but it is only part-time.
2. The DC Metro system is designed like an X, and much of that X only has a single track going in each direction. So, if anything goes wrong in your part of the X or the center of the X, well, let’s just say you are out of luck.

Now, because of point 2, even if DC blows up in size and develops a late-night night-life, it would be quite difficult to do away with point 1 in any significant fashion without some fundamental design changes. Even if you could magically do away with unexpected problems in the system (and, yes, that would have to include rainy and snowy days), you are still stuck juggling trains between single tracks at best, or shutting down parts of a line and running shuttle buses at worst, for basically any line maintenance. That significantly impacts service on the affected parts of the system, and on the overall system as you move closer to the center of the X. This becomes even more acute due to the increased wear and tear that lovely, underground center of the X suffers, especially as you ramp up service to handle more people or longer hours.

The NYC subway system, by contrast, looks like a bunch of parallel lines that crisscross here and there, and these multitudes of lines generally have multiple tracks supporting each direction. While there may be sparse coverage at some points (e.g., parts of Queens come to mind) and a large concentration of lines at others (e.g., Manhattan), you still tend to end up with enough independent ways to route around much of the maintenance work and unexpected problems. As a result, the effect on the overall system tends to be quite limited, and interruptions to the particular parts undergoing work or suffering issues can be mitigated or minimized (e.g., the handling of maintenance work on the 7 line happening right now), even in spite of 24/7 operation.

Like so much else that varies between NYC and its neighbors, there is a big difference in the scale/scalability and availability requirements of public transportation for an enormous 24/7 city accustomed to public transit and a large 18/7 city accustomed to the beltway. So, when it comes down to how to spend limited resources, one gets beauty sleep, the other keeps on keeping on.

-

Finally, it has been a while since I have pointed out notably pleasant customer service experiences. So, here is some praise for the LIRR cleaning crew in Penn Station, left in a comment.

To Whom It May Concern, I would like to praise all the cleaning men and women who work in the station. They all do a fantastic job on keeping the station clean.

## Quick note – a crack at building TC 6.1a on FBSD 7.1

March 29th, 2009

Prompted by a comment and this rainy day…

Myself, I mostly think of TrueCrypt as Windows software today, so I can’t say I use it much on FreeBSD. However, I haven’t had much trouble getting TrueCrypt 6.1a to build on FreeBSD 7.1, even though I may not use the end result.

So, starting off with a pretty much stock FreeBSD 7.1 system with Gnome 2.24 installed as my test environment, I was able to do something like the following to get TrueCrypt 6.1a to build from source.

I downloaded, verified, and extracted the TrueCrypt 6.1a source.

I took a look at the “Readme.txt” in the root of the extracted TrueCrypt source tree and ensured that I met the requirements listed in the “Requirements for Building TrueCrypt for Linux and Mac OS X” section. I found the bulk of what may be missing (I say “may be missing” because most dependencies were already met in this test setup) readily available in the FreeBSD ports tree (e.g., gmake, fusefs-kmod, fusefs-libs, pkg-config, wxgtk). I had PKCS#11 header files lying around, but I also tried downloading fresh ones (e.g., pkcs11.h, pkcs11f.h, pkcs11t.h) from the RSA ftp site.

One important note here – the FreeBSD ports tree has split wxWidgets into multiple versions/flavors. After glancing at the root TrueCrypt makefile, I decided to go with “wxgtk28-unicode” from the FreeBSD ports for the TrueCrypt build. This choice has later implications, as the TrueCrypt build defaults to the use of the generic “wx-config”, which will not exist when using one of the wxWidgets from the FreeBSD ports, since “wx-config” is renamed in accordance with the specific version/flavor of the installed FreeBSD port. While I didn’t notice the “Readme.txt” calling out that this can be tweaked, the root TrueCrypt makefile is clear here and the choice of “wx-config” can be conveniently overridden, as noted next.

With the dependencies done, I then followed the build instructions for Linux and Mac OS X in the “Readme.txt”, interpreting them for my environment. For my particular test setup, there were three main tweaks here – use “gmake” (GNU make) rather than “make” (the stock FreeBSD make), set “WX_CONFIG” appropriately (since I was using “wxgtk28-unicode”, I set this to “wxgtk2u-2.8-config” overriding the default “wx-config”), and set “PKCS11_INC” appropriately since the PKCS#11 header files I wanted to use were not in the default include paths used by the TrueCrypt build (I pointed this to the directory where I had the PKCS#11 header files). So, from the root directory of the TrueCrypt source tree, I ran something like “gmake WX_CONFIG=wxgtk2u-2.8-config PKCS11_INC=<path to downloaded header files>”.

After building successfully and prior to running the resulting “truecrypt” executable (found in “./Main” in the source tree), I checked to see if the FUSE kernel module was loaded. It wasn’t, so I loaded it (e.g., “sudo kldload /usr/local/modules/fuse.ko”).

Then I ran “truecrypt”. I ran its crypto tests, created a new TrueCrypt container with a UFS filesystem, and mounted, dismounted, and created/modified/deleted some files on this container. Everything appeared to be working.

And that is about all I can say about TrueCrypt 6.1a on FreeBSD 7.1.

## Notes – ff3, -n, ps, misc tools, lsof/fstat, etc

March 17th, 2009

Since there was some interest in the previous post, here are a few more notes and tips. Most often I forget to mention those standard tools we all take for granted, so I threw some of those in as well. Oh, and I won’t be keeping up this extremely rapid posting rate (i.e., 2 in about a week). (Note: If I played with any of what is written here, it was on Ubuntu 8.10 or FreeBSD 7.1.)

1. By default in Firefox 3, when opening Tools->Add-ons, Firefox sends queries to *.mozilla.com in order to populate the “Get Add-ons” pane. This pane and its “paneful” behavior can be disabled by setting the “extensions.getAddons.showPane” preference to “false” through, say, “about:config”.
2. Using the “-n” option is your friend for listing network address information with utilities like “lsof” (e.g., “lsof -n”), “netstat” (e.g., “netstat -an”), and “iptables” (e.g., “iptables -n --list”), as it prevents the resolution of numeric network addresses, which can be annoying, revealing, and time consuming.
3. Dislike when ps truncates rather than wraps? The “w” (wide output – 132 columns) or “ww” (unlimited width) option is your friend. If you need finer-grained control than that for some reason, you could try setting a “COLUMNS” environment variable in your shell to override the width determined by ps automatically. Or, in a nod to a later bullet point in this post, you may have additional ps options, like “--cols” or “--columns”, that can be used to specify these settings in the GNU/Linux ps world.
4. In much of the *n[iu]x world, if you want to know how many lines, words, or bytes are in a file (or stdin), “wc” is often where it’s at.
5. Want to log to syslog using a command from a shell in much of the *n[iu]x world? Try “logger”.
6. In much of the *n[iu]x world, ldd can be used to list the shared object (i.e., dynamically linked library) dependencies of a binary. (A question of what stock commands are available to do this came up on a non-technical mailing list I am on recently.)
7. On GNU/Linux, lsof can be used to list open files, and it is quite a powerful utility. For example, “lsof +L 1” will list all open files that have a zero link count (e.g., open files that are unlinked from the file system – this is useful to play with if you have ever wondered why you can upgrade packages/ports in place that have currently running applications).
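To see item 7 in action on GNU/Linux (assuming lsof is installed), you can hold a file open, unlink it, and watch it turn up with a zero link count:

```sh
#!/bin/sh
# Hold a file open, unlink it, then list it via its zero link count.
tmp=$(mktemp)
tail -f "$tmp" >/dev/null &
pid=$!
rm "$tmp"              # unlinked, but tail still holds it open

lsof +L 1 -p "$pid"    # should show the unlinked file with NLINK 0

kill "$pid"
```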

Now, on GNU/Linux, you can dig into “/proc”, which contains a wealth of kernel information. Browsing the contents of “/proc/[pid]” can be used to access the process information for a particular process, and, coming back to our example, this includes the contents of files open by that process – e.g., “/proc/[pid]/fd/[fd]” for files open by the process and “/proc/[pid]/exe” for the executable of the process itself, both of which can be used to access the contents of files open by the process with a zero link count.
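Here is a small sketch of that on GNU/Linux, holding a file open from the shell, unlinking it, and reading the data back through “/proc/[pid]/fd”:

```sh
#!/bin/sh
# Recover the contents of an open-but-unlinked file via /proc (GNU/Linux).
tmp=$(mktemp)
echo "still here" > "$tmp"

exec 3< "$tmp"       # hold an open descriptor on the file
rm "$tmp"            # link count drops to zero; data remains on disk

cat /proc/$$/fd/3    # prints: still here
exec 3<&-            # close it, and the data really goes away
```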

On FreeBSD, fstat can be used to list open files; however, it does not have quite the same flexibility as lsof. For example, listing open files with a zero link count is not simply a matter of command line options to fstat, although you could hack a script together utilizing fstat with other standard tools available on FreeBSD, hopefully unlike the following.

```sh
#!/bin/sh
# This is worse than the worst kludge. Do not use it.

dofind() {
    # This find is ugly and potentially huge. Not to mention, pkill could
    # run amuck. Perhaps you could use info gathered from, say,
    # "ps wwe -p $pid -o command=" to help refine these searches,
    # but, like I said, all of this herein is worse than the worst kludge.
    find -x $mount -inum $inum -exec pkill -SIGINT -f -n -g 0 -s 0 \
        "find -x $mount -inum $inum -exec pkill" \; 2>/dev/null
    return $?
}

doout() {
    echo "$cmd $pid $user $fd $szdv $mode $rw $inum $mount $1"
    [ $1 ] || ps ww -p $pid -o command= 2>/dev/null
}

fstat_unlink() {
    fstat $* 2>/dev/null |
    while read user cmd pid fd mount inum mode szdv rw junk; do
        case $fd in
        # We have the header
        "FD")
            doout " CommandWithArgs" ;;
        # We have a special
        [!0-9]*)
            dofind
            [ $? -eq 1 ] && doout ;;
        # We have a standard except non-inodes
        *[0-9])
            dofind
            [ $? -eq 1 ] && doout ;;
        esac
    done
}

fstat_unlink $*
```

Now, FreeBSD has variants of /proc (procfs and linprocfs), but these are not quite as free with the information as on Linux. This means our example of extracting information from open but unlinked files may be a bit more involved; however, you can still fall back on standard tools on FreeBSD to do it, such as fsdb and dd. For example, once you have a list of the inodes and devices for unlinked files, then you can access FFS inode data with “fsdb -r” using its “inode” and “blocks” CLI commands to get the disk blocks of the relevant inodes, and then use “dd if= bs= count= skip=” to dump the disk blocks associated with the inode, perhaps into a file.
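The fsdb part needs a real FFS file system and inode to poke at, but the dd arithmetic can be tried on any ordinary file. A toy example with made-up 4-byte “blocks”:

```sh
#!/bin/sh
# Demonstrate dd's bs/skip/count arithmetic: write four 4-byte "blocks",
# then pull out only the third one (skip the first two, copy one).
printf 'AAAABBBBCCCCDDDD' > blocks.bin
dd if=blocks.bin bs=4 skip=2 count=1 2>/dev/null    # prints: CCCC
rm -f blocks.bin
```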

Or, you could just write some code. However, if you dig into the FreeBSD ports, you will find that lsof is available there. As are tools/toolkits like foremost and sleuthkit. No need to reinvent the wheel.

8. Little tweaks to common system applications can be tricky business in the *n[iu]x world. For example, take the stock find on Ubuntu GNU/Linux (8.10) and FreeBSD (7.1). In this particular GNU/Linux world, our find comes with a “-quit” action that can be used to exit find after it has found particular results; however, in this particular BSD Unix world, we find that our find does not support “-quit”, yet we can combine pkill with the “-exec” primary/action of find to have a “find” process kill itself after it has found particular results. A form of this use is illustrated by the kludge script above. Also, being specifically tied to this particular BSD Unix world, that same kludge script uses “-x” over the equivalent “-xdev”, but, in this particular GNU/Linux world, find provides solely “-xdev”. Yet another example: the find in this particular BSD Unix world has the “-X” option to allow for safe use with xargs, but the find in this particular GNU/Linux world sticks with the good old “-print0” and “xargs -0” technique.

With all this lovely variation, it is often good practice to at least know if not use the more standard, common, portable variants of commands and their options, as you never know where you may find yourself or your scripts ending up.
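For example (assuming a stock /etc/hosts to look for), GNU find can stop at the first match with “-quit”, while the NUL-separated pipeline is available with both modern GNU and BSD finds:

```sh
#!/bin/sh
# GNU find: stop after the first match (stock BSD find lacks -quit)
find /etc -maxdepth 1 -name hosts -print -quit

# NUL-safe pipeline; modern GNU and BSD finds both have -print0,
# and BSD find alternatively offers -X for xargs safety
find /etc -maxdepth 1 -name 'host*' -print0 | xargs -0 ls -ld
```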

9. Having worked with iptables quite a bit of late has served to remind me of the elegance of pf.
10. bangbang, baby. shell-fu.