In free software, there is a fairly smooth continuum between purely internal discussions and public relations statements. This is partly because the target audience is always ill-defined: given that most or all posts are publicly accessible, the project doesn't have full control over the impression the world gets. Someone—say, a slashdot.org editor—may draw millions of readers' attention to a post that no one ever expected to be seen outside the project. This is a fact of life that all open source projects live with, but in practice, the risk is usually small. In general, the announcements that the project most wants publicized are the ones that will be most publicized, assuming you use the right mechanisms to indicate relative newsworthiness to the outside world.
For major announcements, there tend to be four or five main channels of distribution, on which announcements should be made as nearly simultaneously as possible:
Your project's front page is probably seen by more people than any other part of the project. If you have a really major announcement, put a blurb there. The blurb should be a very brief synopsis that links to the press release (see below) for more information.
At the same time, you should also have a "News" or "Press Releases" area of the web site, where the announcement can be written up in detail. Part of the purpose of a press release is to provide a single, canonical "announcement object" that other sites can link to, so make sure it is structured accordingly: either as one web page per release, as a discrete blog entry, or as some other kind of entity that can be linked to while still being kept distinct from other press releases in the same area.
If your project has an RSS feed, make sure the announcement goes out there too. This may happen automatically when you create the press release, depending on how things are set up at your site. (RSS is a mechanism for distributing meta-data-rich news summaries to "subscribers", that is, people who have indicated an interest in receiving those summaries. See http://www.xml.com/pub/a/2002/12/18/dive-into-xml.html for more information about RSS.)
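If you are curious what an entry in such a feed actually looks like, here is a minimal sketch, in Python (standard library only), that generates a bare-bones RSS 2.0 item for a release announcement. The project name, URLs, dates, and output file name are hypothetical placeholders; in practice your web or forge software will usually generate the feed for you, so treat this only as an illustration of the format.

# Sketch: write a one-item RSS 2.0 feed for an announcement.
# All names and URLs below are placeholders, not real project data.
import xml.etree.ElementTree as ET

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Scanley News"
ET.SubElement(channel, "link").text = "http://www.scanley.org/"
ET.SubElement(channel, "description").text = "Announcements from the Scanley project"

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Scanley 2.0 released"
# Link to the canonical press release, so aggregators point readers there.
ET.SubElement(item, "link").text = "http://www.scanley.org/press/2.0-release.html"
ET.SubElement(item, "description").text = "Scanley 2.0 adds regular-expression searches."
ET.SubElement(item, "pubDate").text = "Mon, 15 Aug 2005 12:00:00 GMT"

ET.ElementTree(rss).write("news.rss", encoding="utf-8", xml_declaration=True)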
If the announcement is about a new release of the software, then update your project's entry on http://freshmeat.net/ (see the section called “Announcing” about creating the entry in the first place). Every time you update a Freshmeat entry, that entry goes onto the Freshmeat change list for the day. The change list is updated not only on Freshmeat itself, but on various portal sites (including slashdot.org), which are watched eagerly by hordes of people. Freshmeat also offers the same data via RSS feed, so people who are not subscribed to your project's own RSS feed might still see the announcement via Freshmeat's.
Send a mail to your project's announcement mailing list. This list's name should actually be "announce", that is, announce@yourprojectdomain.org, because that's a fairly standard convention now, and the list's charter should make it clear that it is very low-traffic, reserved for major project announcements. Most of those announcements will be about new releases of the software, but occasionally other events, such as a fundraising drive, the discovery of a security vulnerability (see the section called “Announcing Security Vulnerabilities” later in this chapter), or a major shift in project direction may be posted there as well. Because it is low-traffic and used only for important things, the announce list typically has the highest subscribership of any mailing list in the project (of course, this means you shouldn't abuse it—consider carefully before posting). To avoid random people making announcements, or worse, spam getting through, the announce list must always be moderated.
Try to make the announcements in all these places at the same time, as nearly as possible. People might get confused if they see an announcement on the mailing list but then don't see it reflected on the project's home page or in its press releases area. If you get the various changes (emails, web page edits, etc.) queued up and then send them all in a row, you can keep the window of inconsistency very small.
For a less important event, you can eliminate some or all of the above outlets. The event will still be noticed by the outside world in direct proportion to its importance. For example, while a new release of the software is a major event, merely setting the date of the next release, while still somewhat newsworthy, is not nearly as important as the release itself. Setting a date is worth an email to the daily mailing lists (not the announce list), and an update of the project's timeline or status web page, but no more.
However, you might still see that date appearing in discussions elsewhere on the Internet, wherever there are people interested in the project. People who are lurkers on your mailing lists, just listening and never saying anything, are not necessarily silent elsewhere. Word of mouth gives very broad distribution; you should count on it, and construct even minor announcements in such a way as to encourage accurate informal transmission. Specifically, posts that you expect to be quoted should have a clearly meant-to-be-quoted portion, just as though you were writing a formal press release. For example:
Just a progress update: we're planning to release version 2.0 of Scanley in mid-August 2005. You can always check http://www.scanley.org/status.html for updates. The major new feature will be regular-expression searches.
Other new features include: ... There will also be various bugfixes, including: ...
The first paragraph is short, gives the two most important pieces of information (the release date and the major new feature), and a URL to visit for further news. If that paragraph is the only thing that crosses someone's screen, you're still doing pretty well. The rest of the mail could be lost without affecting the gist of the content. Of course, sometimes people will link to the entire mail anyway, but just as often, they'll quote only a small part. Given that the latter is a possibility, you might as well make it easy for them, and in the bargain get some influence over what gets quoted.
Handling a security vulnerability is different from handling any other kind of bug report. In free software, doing things openly and transparently is normally almost a religious credo. Every step of the standard bug-handling process is visible to all who care to watch: the arrival of the initial report, the ensuing discussion, and the eventual fix.
Security bugs are different. They can compromise users' data, and possibly users' entire computers. To discuss such a problem openly would be to advertise its existence to the entire world—including to all the parties who might make malicious use of the bug. Even merely committing a fix effectively announces the bug's existence (there are potential attackers who watch the commit logs of public projects, systematically looking for changes that indicate security problems in the pre-change code). Most open source projects have settled on approximately the same set of steps to handle this conflict between openness and secrecy, based on these basic guidelines:
Don't talk about the bug publicly until a fix is available; then supply the fix at exactly the same moment you announce the bug.
Come up with that fix as fast as you can—especially if someone outside the project reported the bug, because then you know there's at least one person outside the project who is able to exploit the vulnerability.
In practice, those principles lead to a fairly standardized series of steps, which are described in the sections below.
Obviously, a project needs the ability to receive security bug reports from anyone. But the regular bug reporting address won't do, because it can be watched by anyone too. Therefore, have a separate mailing list for receiving security bug reports. That mailing list must not have publicly readable archives, and its subscribership must be strictly controlled—only long-time, trusted developers can be on the list. If you need a formal definition of "trusted", you can use "anyone who has had commit access for two years or more" or something like that, to avoid favoritism. This is the group that will handle security bugs.
Ideally, the security list should not be spam-protected or moderated, since you don't want an important report to get filtered out or delayed just because no moderators happened to be online that weekend. If you do use automated spam-protection software, try to configure it with high-tolerance settings; it's better to let a few spams through than to miss a report. For the list to be effective, you must advertise its address, of course; but given that it will be unmoderated and, at most, lightly spam-protected, try never to post its address without some sort of address-hiding transformation, as described in the section called “Address hiding in archives” in Chapter 3, Technical Infrastructure. Fortunately, address hiding need not make the address illegible; see http://subversion.tigris.org/security.html, and view that page's HTML source, for an example.
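One such transformation, sketched below in Python, is simply to encode each character of the address as an HTML numeric character reference: browsers render it as a normal, readable address, but naive harvesting scripts scanning the raw HTML are less likely to recognize it. The address shown is a placeholder, and this is only one of many possible hiding schemes.

# Sketch: hide an email address by encoding every character as an
# HTML numeric entity.  The address below is a placeholder.
def hide_address(addr):
    """Return addr with each character as an HTML numeric entity."""
    return "".join("&#%d;" % ord(c) for c in addr)

print(hide_address("security@scanley.org"))
# Prints: &#115;&#101;&#99;&#117;&#114;&#105;&#116;&#121;&#64;... and so on.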
So what does the security list do when it receives a report? The first task is to evaluate the problem's severity and urgency:
How serious is the vulnerability? Does it allow a malicious attacker to take over the computer of someone who uses your software? Or does it, say, merely leak information about the sizes of some of their files?
How easy is it to exploit the vulnerability? Can an attack be scripted, or does it require circumstantial knowledge, educated guessing, and luck?
Who reported the problem to you? The answer to this question doesn't change the nature of the vulnerability, of course, but it does give you an idea of how many other people might know about it. If the report comes from one of the project's own developers, you can breathe a little easier (but only a little), because you can trust them not to have told anyone else about it. On the other hand, if it came in an email from anonymous14@globalhackerz.net, then you'd better act as fast as you can. The person did you a favor by informing you of the problem at all, but you have no idea how many other people she's told, or how long she'll wait before exploiting the vulnerability on live installations.
Note that the difference we're talking about here is often just a narrow range between urgent and extremely urgent. Even when the report comes from a known, friendly source, there could be other people on the Net who discovered the bug long ago and just haven't reported it. The only time things aren't urgent is when the bug inherently does not compromise security very severely.
The "anonymous14@globalhackerz.net
" example
is not facetious, by the way. You really may get bug reports from
identity-cloaked people who, by their words and behavior, never quite
clarify whether they're on your side or not. It doesn't matter: if
they've reported the security hole to you, they'll feel they've done
you a good turn, and you should respond in kind. Thank them for the
report, give them a date on or before which you plan to release a
public fix, and keep them in the loop. Sometimes they may
give you a date—that is, an implicit threat
to publicize the bug on a certain date, whether you're ready or not.
This may feel like a bullying power play, but it's more likely a
preëmptive action resulting from past disappointment with
unresponsive software producers who didn't take security reports
seriously enough. Either way, you can't afford to tick this person
off. After all, if the bug is severe, he has knowledge that could
cause your users big problems. Treat such reporters well, and hope
that they treat you well.
Another frequent reporter of security bugs is the security professional, someone who audits code for a living and keeps up on the latest news of software vulnerabilities. These people usually have experience on both sides of the fence—they've both received and sent reports, probably more than most developers in your project have. They too will usually give a deadline for fixing a vulnerability before going public. The deadline may be somewhat negotiable, but that's up to the reporter; deadlines have become recognized among security professionals as pretty much the only reliable way to get organizations to address security problems promptly. So don't treat the deadline as rude; it's a time-honored tradition, and there are good reasons for it.
Once you know the severity and urgency, you can start working on a fix. There is sometimes a tradeoff between doing a fix elegantly and doing it speedily; this is why you must agree on the urgency before you start. Keep discussion of the fix restricted to the security list members, of course, plus the original reporter (if she wants to be involved) and any developers who need to be brought in for technical reasons.
Do not commit the fix to the repository. Keep it in patch form until the go-public date. If you were to commit it, even with an innocent-looking log message, someone might notice and understand the change. You never know who is watching your repository and why they might be interested. Turning off commit emails wouldn't help; first of all, the gap in the commit mail sequence would itself look suspicious, and anyway, the data would still be in the repository. Just do all development in a patch and keep the patch in some private place, perhaps a separate, private repository known only to the people already aware of the bug. (If you use a decentralized version control system like Arch or SVK, you can do the work under full version control, and just keep that repository inaccessible to outsiders.)
You may have seen a CAN number or a CVE number associated with security problems. These numbers usually look like "CAN-2004-0397" or "CVE-2002-0092", for example.
Both kinds of numbers represent the same type of entity: an entry in the list of "Common Vulnerabilities and Exposures" maintained at http://cve.mitre.org/. The purpose of the list is to provide standardized names for all known security problems, so that everyone has a unique, canonical name to use when discussing one, and a central place to go to find out more information. The only difference between a "CAN" number and a "CVE" number is that the former represents a candidate entry, not yet approved for inclusion in the official list by the CVE Editorial Board, while the latter represents an approved entry. However, both types of entries are visible to the public, and an entry's number does not change when it is approved—the "CAN" prefix is simply replaced with "CVE".
A CAN/CVE entry does not itself contain a full description of the bug and how to protect against it. Instead, it contains a brief summary, and a list of references to external resources (such as mailing list archives) where people can go to get more detailed information. The real purpose of http://cve.mitre.org/ is to provide a well-organized space in which every vulnerability can have a name and a clear route to more data. See http://cve.mitre.org/cgi-bin/cvename.cgi?name=2002-0092 for an example of an entry. Note that the references can be very terse, with sources appearing as cryptic abbreviations. A key to those abbreviations is at http://cve.mitre.org/cve/refs/refkey.html.
If your vulnerability meets the CVE criteria, you may wish to acquire a CAN number for it. The process for doing so is deliberately gated: basically, you have to know someone, or know someone who knows someone. This is not as crazy as it might sound. In order to avoid being overwhelmed with spurious or poorly written submissions, the CVE Editorial Board takes submissions only from sources they already know and trust. To get your vulnerability listed, therefore, you need to find a path of acquaintance from your project to the CVE Editorial Board. Ask around among your developers; one of them will probably know someone else who has either done the CAN process before, or knows someone who has, etc. Doing it this way has an additional advantage: somewhere along the chain, someone may know enough to tell you that a) it wouldn't count as a vulnerability or exposure according to MITRE's criteria, so there is no point submitting it, or b) the vulnerability already has a CAN or CVE number. The latter can happen if the bug has already been published on another security advisory list, for example at http://www.cert.org/ or on the BugTraq mailing list at http://www.securityfocus.com/. (If that happened without your project hearing about it, then you should worry what else might be going on that you don't know about.)
If you get a CAN/CVE number at all, you usually want to get it in the early stages of your bug investigation, so that all further communications can refer to that number. CAN entries are embargoed until the go-public date; the entry will exist as an empty placeholder (so you don't lose the name), but it won't reveal any information about the vulnerability until the date on which you will be announcing the bug and the fix.
More information about the CAN/CVE process may be found at http://cve.mitre.org/about/candidates.html, and a particularly clear exposition of one open source project's use of CAN/CVE numbers is at http://www.debian.org/security/cve-compatibility.
Once your security response team (that is, those developers who are on the security mailing list, or who have been brought in to deal with a particular report) has a fix ready, you need to decide how to distribute it.
If you simply commit the fix to your repository, or otherwise announce it to the world, you effectively force everyone using your software to upgrade immediately or risk being hacked. It is sometimes appropriate, therefore, to do pre-notification for certain important users. This is particularly true with client/server software, where there may be well-known servers that are tempting targets for attackers. Those servers' administrators would appreciate having an extra day or two to do the upgrade, so that they are already protected by the time the exploit becomes public knowledge.
Pre-notification simply means sending mails to those administrators before the go-public date, telling them of the vulnerability and how to fix it. You should send pre-notification only to people you trust to be discreet with the information. That is, the qualification for receiving pre-notification is twofold: the recipient must run a large, important server where a compromise would be a serious matter, and the recipient must be known to be someone who won't blab about the security problem before the go-public date.
Send each pre-notification mail individually (one at a time) to each recipient. Do not send to the entire list of recipients at once, because then they would see each other's names—meaning that you would essentially be alerting each recipient to the fact that every other recipient may have a security hole in her server. Sending it to them all via blind CC (BCC) isn't a good solution either, because some admins protect their inboxes with spam filters that either block or reduce the priority of BCC'd mail, since so much spam is sent via BCC these days.
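If you script the sending, the essential point is just to loop and send one message per recipient, with a single address in the To: header. Here is a minimal sketch in Python using only the standard library; the addresses, sender, SMTP host, and file name are hypothetical placeholders.

# Sketch: send the pre-notification separately to each recipient,
# never exposing one recipient's address to another.
import smtplib
from email.message import EmailMessage

recipients = [
    "admin@large-famous-server.com",
    "admin@another-important-site.example",
]

body = open("prenotification.txt").read()   # the text shown below

with smtplib.SMTP("localhost") as smtp:
    for addr in recipients:
        msg = EmailMessage()
        msg["From"] = "Your Name Here <you@example.org>"
        msg["Reply-To"] = "you@example.org"   # not the security list's address
        msg["To"] = addr                      # exactly one recipient per message
        msg["Subject"] = "Confidential Scanley vulnerability notification."
        msg.set_content(body)
        smtp.send_message(msg)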
Here's a sample pre-notification mail:
From: Your Name Here
To: admin@large-famous-server.com
Reply-to: Your Name Here (not the security list's address)
Subject: Confidential Scanley vulnerability notification.

This email is a confidential pre-notification of a security alert
in the Scanley server.

Please *do not forward* any part of this mail to anyone.  The
public announcement is not until May 19th, and we'd like to keep
the information embargoed until then.

You are receiving this mail because (we think) you run a Scanley
server, and would want to have it patched before this security
hole is made public on May 19th.

References:
===========
   CAN-2004-1771: Scanley stack overflow in queries

Vulnerability:
==============
   The server can be made to run arbitrary commands if the
   server's locale is misconfigured and the client sends a
   malformed query.

Severity:
=========
   Very severe, can involve arbitrary code execution on the
   server.

Workarounds:
============
   Setting the 'natural-language-processing' option to 'off' in
   scanley.conf closes this vulnerability.

Patch:
======
   The patch below applies to Scanley 3.0, 3.1, and 3.2.  A new
   public release (Scanley 3.2.1) will be made on or just before
   May 19th, so that it is available at the same time as this
   vulnerability is made public.  You can patch now, or just wait
   for the public release.  The only difference between 3.2 and
   3.2.1 will be this patch.

[...patch goes here...]
If you have a CAN number, include it in the pre-notification (as shown above), even though the information is still embargoed and therefore the MITRE page will show nothing. Including the CAN number allows the recipient to know with certainty that the bug they were pre-notified about is the same one they later hear about through public channels, so they don't have to worry whether further action is necessary or not, which is precisely the point of CAN/CVE numbers.
The last step in handling a security bug is to distribute the fix publicly. In a single, comprehensive announcement, you should describe the problem, give the CAN/CVE number if any, describe how to work around it, and how to permanently fix it. Usually "fix" means upgrading to a new version of the software, though sometimes it can mean applying a patch, particularly if the software is normally run in source form anyway. If you do make a new release, it should differ from some existing release by exactly the security patch. That way, conservative admins can upgrade without worrying about what else they might be affecting; they also don't have to worry about future upgrades, because the security fix will be in all future releases as a matter of course. (Details of release procedures are discussed in the section called “Security Releases” in Chapter 7, Packaging, Releasing, and Daily Development.)
Whether or not the public fix involves a new release, do the announcement with roughly the same priority as you would a new release: send a mail to the project's announce list, make a new press release, update the Freshmeat entry, etc. While you should never try to play down the existence of a security bug out of concern for the project's reputation, you may certainly set the tone and prominence of a security announcement to match the actual severity of the problem. If the security hole is just a minor information exposure, not an exploit that allows the user's entire computer to be taken over, then it may not warrant a lot of fuss. You may even decide not to distract the announce list with it. After all, if the project cries wolf every time, users might end up thinking the software is less secure than it actually is, and also might not believe you when you have a really big problem to announce. See http://cve.mitre.org/about/terminology.html for a good introduction to the problem of judging severity.
In general, if you're unsure how to treat a security problem, find someone with experience and talk to them about it. Assessing and handling vulnerabilities is very much an acquired skill, and it's easy to make missteps the first few times.