Member Since: May 27, 2011
Country: United States
I'd love to see more random, interesting demo projects that don't necessarily have a specific purpose but can ultimately serve as the foundation for something else. I found that learning by example is by far one of the most enjoyable and quickest ways to pick something up, and having a short, concise tutorial attached helps when you get stuck.
As a matter of fact, I just finished setting up our roof antenna! We were going to go with two AF5 (AirFiber) units; however, those were significant overkill, so I went with the more economical NanoBeam M5s. The viewsheds from the roof were dubious at best (10% 1st Fresnel zone clearance and about a 5' window between buildings). So this year, instead of a 3' tripod, I opted for something a little more substantial -- it's not particularly well done, but it's only going to be up for a few days.
Unfortunately, you can't without having a copy of the old certificate to check against the revocation list. In my findings, it was actually extremely difficult, and usually impossible, to get this exploit to disclose any key material unless exactly the right circumstances existed. More than anything, what was at risk was the encrypted data being sent to and received from servers.
It's somewhat unlikely that this was being actively exploited over the last two years, but just about anything you've done anywhere on the internet in the last 3 days should be considered entirely compromised. If you've had any active sessions on any sites, you should log out (someone can 'assume' your session to get into your account) and change any passwords you may have used in the last week. This includes major sites such as Facebook, GitHub, Indiegogo, etc., although I know most large organizations are aware of this threat and have manually reset all active sessions to mitigate it.
This is normal - our certificate was reissued, and the old one revoked, which means the issue date and expiry date will stay the same.
Edit: We normally use what's known as an EV (extended validation) certificate; however, the reissue for it takes several days to several weeks, so we've fallen back to a traditional, non-EV one. Security-wise, they're the same, but EV gives you the green bar with the organization's name that you see on some sites. Once Comodo grants our reissue and revocation request, we'll replace our certs once more.
Last night I wanted to see just how serious this was in the wild, and I can confirm with absolute certainty that it is the worst I've ever seen. It's extremely trivial for an attacker to pick up your credentials from POST data sitting in the nginx/apache heap, and I'd run under the assumption that there was widespread data mining happening within hours, if not minutes, of the CVE being released.
To get more specific about the issue, we learned that in Postgres, a pending "Access Exclusive Lock" on a table blocks all future "Access Share Lock" requests, even when the exclusive lock itself has not yet been granted because an existing Share Lock is still held by long-running queries. We're looking forward to PostgreSQL 9.4, where we'll be able to use a new feature: concurrent matview refreshes.
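Not our exact schema, but a minimal sketch of what that 9.4 feature looks like -- the product_stock_mv name here is made up. CONCURRENTLY requires a unique index on the materialized view, and in return the refresh no longer blocks readers:

```sql
-- CONCURRENTLY needs at least one unique index on the materialized view
CREATE UNIQUE INDEX product_stock_mv_product_id_idx
    ON product_stock_mv (product_id);

-- Pre-9.4 (and still the default): takes an ACCESS EXCLUSIVE lock,
-- so every SELECT against the view queues up behind the refresh
REFRESH MATERIALIZED VIEW product_stock_mv;

-- 9.4+: the view stays queryable; the refresh builds a temporary copy
-- and applies the differences, so readers are never blocked
REFRESH MATERIALIZED VIEW CONCURRENTLY product_stock_mv;
```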
We use a trigger to update our materialized view whenever a product quantity needs to change (there's a rough sketch of that trigger setup after the scenario below). Unfortunately, this could lead to the following scenario:
** Someone places / picks an order, which needs to update the quantity on 10 different products **
We have a long query of some kind running, so when product #1 triggers a matview refresh, it requests an Access Exclusive Lock on the materialized view, but it's not yet granted
Some time passes, and another random long-running query is enqueued after the matview refresh, along with hundreds of other normal queries (product views, logins, etc.). These are all blocked from executing by the pending Access Exclusive Lock, even though it has not been granted
Long query #1 finishes, releases its share lock (~5 sec)
Matview refresh is granted the exclusive lock, is executed, and releases it in a timely manner (~150ms)
Long query #2 starts, joins/creates a share lock
In the meantime, our PHP backend fires off the stock change for product #2
Rinse, repeat, timeout =(
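For reference, the trigger setup described above looks roughly like this -- the table, column, and view names are hypothetical, not our actual schema:

```sql
-- Refresh the stock materialized view whenever a product's quantity changes.
-- The REFRESH below is what requests the ACCESS EXCLUSIVE lock described
-- in the scenario above.
CREATE OR REPLACE FUNCTION refresh_product_stock_mv() RETURNS trigger AS $$
BEGIN
    REFRESH MATERIALIZED VIEW product_stock_mv;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER product_quantity_changed
    AFTER UPDATE OF quantity ON products
    FOR EACH ROW
    EXECUTE PROCEDURE refresh_product_stock_mv();
```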
One update could cause future queries to hang for seconds, if not more. Stack multiple product quantity updates together from different sources, and we'd have periods of mass timeouts.
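If you want to see this kind of pile-up on your own database, the waiting lock requests show up directly in pg_locks. A quick diagnostic query (again with the hypothetical view name):

```sql
-- List every lock held or requested on the materialized view, whether it has
-- actually been granted, and what each backend is running or waiting on
SELECT l.pid, l.mode, l.granted, a.query, now() - a.query_start AS running_for
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'product_stock_mv'::regclass
ORDER BY l.granted DESC, a.query_start;
```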
Casey was able to optimize one of the long-running queries to prevent the above from causing timeouts; however, we're reworking things and optimizing our Postgres locking to prevent this from ever happening again.
Thanks for the report -- we're tracking down this issue now and should have a fix in place soon.
Sorry, MathMyfanwy! Unfortunately, that error was seen by a few people who were in the checkout process after it had already sold out. The entire stock lasted only 2m10s, so the window to get your order in was rather small.
Thanks for the report! We've fixed this in our dev environment; however, it won't roll out until tomorrow morning.
Yarr?