I just tweaked my online banking configuration: setting up bill payment from a different bank account, changing some settings on some credit cards, and so forth. In doing so, I had to both set up new "security questions" and answer some I had set up in the past.
These "security questions" are the result of a US banking regulation mandating that online banking use "Two-Factor Authentication". Two-factor auth means, in theory, that authentication is done on the basis of "something you know", which means a password, and "something you have", which means something like an RSA SecurID or VeriSign VIP token, or the endpoint of a second communication channel, such as a cellphone receiving SMS.
The banking industry, being fools, knaves, and villains, decided that issuing, or even selling, most everyone a security token "was too expensive and confusing". Instead it complained, lied, did the usual regulatory capture dance, and managed to convince the banking regulators (see "fools, knaves, and villains", above) that knowing the answer to a "security question" counts as a "second factor".
!
Now, maybe it's true that for a significant fraction of the banks' clients, using an RSA token really is too "confusing". But for those of us with a clue, please give us the option! Let me buy one from a list of approved varieties/brands of security tokens for a couple of bucks, register it with each of my banks, credit cards, and other "secure" sites, and then have the option to use it.
It's not even really necessary to buy something. It can be a little app that runs on a smartphone, or even just the ability to receive an SMS message on a not-so-smart phone.
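As a sketch of what such a phone app would compute: here is a minimal TOTP implementation (the RFC 6238 time-based one-time password scheme, which is what later soft-token apps standardized on), using only the Python standard library. The secret key below is an illustrative shared value for demonstration, not any bank's actual scheme.

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    # Counter = number of `step`-second intervals since the Unix epoch
    counter = struct.pack(">Q", for_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: low nibble of last byte picks an offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and token share the secret; both compute the same code
# for the same 30-second window, so no network round trip is needed.
shared_secret = b"12345678901234567890"  # illustrative key only
print(totp(shared_secret, 59, digits=8))  # → 94287082 (RFC 6238 test vector)
```

The bank's server would verify by computing the same function over the current window (and usually the adjacent windows, to tolerate clock skew).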
To cut the banking industry a bit of slack, I suspect part of the issue was that VeriSign/RSA saw the regulation as a license to gouge the banking industry even harder, and the industry rebelled against them.
I just installed the Sun JRE RPM. I can't help but notice that it installed into /usr/java/jre*/lib/zi/ a nearly complete timezone database in its own Java-ish format.
All modern UNIX machines already have a complete timezone database, the Olson Zoneinfo database, usually kept in /usr/share/zoneinfo/. It's kept exactingly complete and correct by an experienced and committed volunteer effort. The file format is tight, compact, and very easy to parse.
There is no excuse for Sun to clutter up my filesystem with a stale duplicate, and I'm willing to bet that their zi database is derived from an older version of the zoneinfo db.
They wouldn't even need to change their API or any existing programs. I presume they were smart enough that the tz data is accessed through some sort of abstract class interface. Just have that interface read and parse the zoneinfo db as needed, instead of their own db.
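To back up the claim that the format is easy to parse: here's a sketch of reading the fixed 44-byte header of a TZif file, as documented in tzfile(5). The six counts tell you exactly how much transition data follows; the sample bytes in the usage line are synthetic, not read from a real file.

```python
import struct

def read_tzif_header(data: bytes) -> dict:
    """Parse the fixed 44-byte TZif header (see tzfile(5))."""
    if data[:4] != b"TZif":
        raise ValueError("not a TZif file")
    version = data[4:5]  # b"\x00", b"2", or b"3"
    # Fifteen reserved bytes, then six big-endian 32-bit counts
    isutcnt, isstdcnt, leapcnt, timecnt, typecnt, charcnt = struct.unpack(
        ">6i", data[20:44]
    )
    return {
        "version": version,
        "timecnt": timecnt,  # number of transition times
        "typecnt": typecnt,  # number of local time types
        "charcnt": charcnt,  # size of the abbreviation string table
    }

# Synthetic header: version-2 file with 3 transitions, 2 types, 8 abbrev bytes
fake = b"TZif" + b"2" + b"\x00" * 15 + struct.pack(">6i", 0, 0, 0, 3, 2, 8)
print(read_tzif_header(fake))
```

The full record layout (transition times, type indices, abbreviations) follows the header in the order the counts are declared, so a complete reader is only a few dozen more lines.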
This rant was primed by my noticing a CPAN Perl module that commits the same sin: DateTime::TimeZone contains a predigested, Perl-ized, stale copy of the Olson database.
There is no reason for this! DateTime::TimeZone should access and parse the Zoneinfo db on demand, as it's used. There is no reason to waste space on millions of machines, in every CPAN replica, and on the bandwidth between CPAN and user machines, storing and hauling that data around when it's already locally present.
Do Python and Ruby commit this same sin?
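For what it's worth, Python's standard library did eventually take exactly the on-demand approach argued for here: the `zoneinfo` module (PEP 615, Python 3.9+) parses the system's TZif files directly when a zone is requested, falling back to a separately installable `tzdata` package only when no system database exists. The search path it consults is visible:

```python
import zoneinfo

# zoneinfo.TZPATH is the tuple of directories searched for the system's
# Olson database; /usr/share/zoneinfo is typically first on UNIX.
# ZoneInfo("America/New_York") would parse the matching TZif file on demand.
for path in zoneinfo.TZPATH:
    print(path)
```

So the answer, at least for modern Python, is no: it reads the locally present database rather than shipping a stale private copy.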
Every time I upgrade a couple of Perl modules in the CPAN shell, it ends up sucking in another dozen new modules. What is "Purple"? (Ah, a Perl interface to libpurple. And when and why did that get installed?) And why is it now sucking in "DBD::SQLite" and "HTTP::Server::Simple"? And now "DBD::SQLite" is forcing the load of "DBI"...
Eventually the Perl community might as well admit it up front: if you do anything, you're going to load a majority of CPAN onto your machine. They might as well just make the perl binary a BitTorrent node for CPAN...
And so many of these modules fail their test suites, and emit basic warnings complaining about uninitialized variables. I'm glad that people are writing tests; I just wish they would make their code pass those tests! All that downloading and time spent running tests, tests failed, modules not loaded, wasted time, wasted transfer.
Dear Coworker
Jan. 5th, 2006 10:53 am

Dear coworker,
When reorganizing the layout of the directories in our source tree, copying the directories to their new locations, then running p4 delete down the old locations and p4 add down the new locations, makes me want to come over to your seat and take an axe to your desk and your computer.
Did you consider that maybe some of us actually have a use for the file revision log, mainly for when we want to integrate our fixes into a deployment tree, or when some PHB wants to know the past history of some change or some bugfix?
Now, I understand that the botch known as CVS cannot handle file and directory moves properly. However, we are not using CVS. We are using Perforce. We spent a lot of money, per seat, so we could use Perforce.
Perforce does, in fact, know how to move files. There is, in fact, a section of the manual on that very subject, with a clear and easy-to-follow recipe.
Perforce and its change tracker are tied closely to our Bugzilla, our auto-build system, and our talks-to-managers report generator. All of which kinda depend on you not breaking the change tracking.
Thank you for your time and attention in this matter.
Your humble coworker.
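For the record, the recipe in question looks roughly like this. The depot paths are placeholders, and this is the idiom for Perforce of that era, which preserved revision history by recording an integration edge from the old location to the new one before deleting the old files (later Perforce releases added a one-step `p4 move`):

```
p4 integrate //depot/old/path/... //depot/new/path/...
p4 delete //depot/old/path/...
p4 submit
```

After this, `p4 filelog` on any file at the new location follows the integration record back through the file's entire history at the old location.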