Tuesday, March 30, 2004

visitedEurope - World66

This is my World66 map of visited countries in Europe. Generating a world map would be pretty useless for me, as I have never been outside of Europe :)

Visit visitedEurope - World66 to generate your own map

Sunday, March 28, 2004

Wired missing the point?

Wired runs an article about plans in the US to issue trusted travelers a card allowing them to skip time-consuming checks at the airport. The card is supposed to assure we're not dealing with a terrorist planning to crash into the White House here.
By the way, it seems the US government thinks someone buying a one-way ticket is more likely to be a plane hijacker. Which raises the interesting question: why in the world would a professional terrorist buy a one-way ticket while preparing himself for paradise? Certainly after reading the Wired article containing this sensitive piece of governmental information he will think twice about doing so (pun intended). Why save a couple of bucks after all those years of living as a mole in complete anonymity, taking care not to pick up so much as a speeding fine, and then do something as obviously stupid as that?
Well, back to the point: the government wants to reduce the chances of a terrorist attack taking place. Doing everything to prevent terrorists from acting out their evil deeds is a very good thing, don't get me wrong. But we should hope that the measures being taken really help in the process. For example, what we really wouldn't want is the introduction of a card for the rich and influential to be treated quicker than us mortals, under the cover of stopping terrorists in their tracks. Let's take a look at what Bruce Schneier, a well-known and respected voice on these kinds of matters, has to say about all this. In a lengthy contribution to his latest monthly Crypto-Gram newsletter (which seems to be read by 100,000+ readers), he writes about an entrepreneur trying to have his V-ID system introduced. I don't know if Wired and Schneier are talking about the same system here, but the idea of the cards is the same:
"for people who are not on a set of government watch lists to be able to subscribe to the service (or for organizations to buy it for their employees, customers, etc.), and then get faster treatment at security checkpoints around the country"
(quote from crypto-gram).
The card is a voluntary national ID card, for every American without a criminal record to acquire. Schneier's point is that somewhere in the system computers make decisions about card-issuing. Attackers will probe the system for vulnerabilities. And while it won't be easy for ordinary people, and maybe not even for ordinary CS graduates, we really can't be sure it will be safe from a "dedicated and well-funded adversary" (quote). Were terrorists able to acquire such a card, the whole idea behind it would be thwarted; it would even obviously lower security, as potentially dangerous people would now be declared trustworthy by the government itself.
Let me know if you don't agree and think these kinds of cards are in fact a very good idea; I would be glad to alter my opinion on seeing some reasonable arguments here. It's just that I'm quite concerned with the way governments (mine too, the Dutch) are using the threat of terror as an excuse to introduce new laws that certainly have the side effect of diminishing our freedom of movement and privacy. It seems all kinds of measures are taken without further thought; only when it's too late do we see what has been taken from us.

Friday, March 26, 2004

Plaxo vulnerability

I have reservations about online contact systems like Plaxo, which I wrote about back in January. Now, it seems they had a major phishing hole on their homepage. You can never be too careful about putting your whole contact list on the net, for free, in the hands of a trusted third party. In the end there's no such thing as a free... you know.

via Plaxo plugs phishing vulnerability - ZDNet UK News

Thursday, March 25, 2004

Redesign of a Secure Website

I'm in the process of redeveloping my company's secure website. This website allows our clients to access confidential data in a secure way. I thought I'd share it with the world. Maybe it's all old news for you, but hopefully I can contribute just that little bit extra to your understanding of building secure websites. [DISCLAIMER: I'm not working for one of the big companies, so maybe some things here are not exactly according to widely used standards. But hey, as we're just a very small IT department, everything here has to be done in a DIY kind of way. There's no tester around here who'll come up with all kinds of smart things I didn't consider. No, my complete knowledge stems from my own desire to gain knowledge. And when things seem to be seriously wrong here, please don't hesitate to use the comments. I'll be glad to discuss matters a bit further.] Well, let's take off then:

Convenient Access
The data consist of multiple types of information: automatically generated but static HTML pages, Word / PDF documents, and dynamic pages pulling data right from our Management Information System in real time. Of course, access must be secure on one side and convenient for both administrators and clients at the same time. We gave some thought to the best way to do all this, so that even direct links to Word documents yield an Unauthorized error unless the right credentials are given. Right now we use:
  • HTTPS-only access on all pages, to ensure data is encrypted in transit. 128-bit encryption is required; most browsers support this nowadays, and we haven't had any complaints about it so far.

  • As to the client, the usual username / password method of authentication is used. Every client has her own local Windows user account.

  • Every page checks for an authenticated user, and whether this person is the right user. If you forget the latter, it would be possible for one authenticated client to impersonate another. If the user is not authenticated, or she is authenticated but not working under the expected credentials, she is logged off and redirected to the company homepage.

As you can see, only a server certificate is used, as we're mostly concerned that data not be sniffed along the way. It's too much of a burden for clients to also have to cough up a client certificate. Client certificates don't seem to be in wide use nowadays, so I don't think this is a major flaw in our system. You could argue about that, but we're not talking military-grade security here (note the irony in the term military :)
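The per-page check in the list above boils down to two questions: is anyone logged in at all, and is it the right someone? The site itself runs on Windows authentication under IIS / classic ASP; the following Python sketch is purely illustrative, and the function and variable names are my own invention.

```python
def authorize(session_user, resource_owner):
    """Allow access only when an authenticated user requests her own data.

    session_user: the authenticated username, or None if not logged in.
    resource_owner: the client the requested page or document belongs to.
    """
    if session_user is None:
        return False                      # not authenticated at all
    return session_user == resource_owner  # authenticated, but the right user?
```

Skipping the second comparison is exactly the impersonation hole described above: any logged-in client could then fetch another client's pages.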

Getting Rid of Windows Accounts
One of the disadvantages of the current system is that adding a new Windows user account for every new client involves some administrative hurdles. Lacking remote access to the webserver, we need physical access to it for every new client that has to be added, or for passwords that have to be changed on request. We'd like to use standard Forms Based Authentication (FBA), with the usernames stored in a database table and passwords in hashed form. We plan to include the option for clients to change their passwords as they wish; and they're definitely forced to change them at their first login. I guess hashed passwords require that a client be able to reset her own password anyway, as administrators will no longer have the possibility to change them. Using a table for storing user login information will make administration as easy as adding a record to a table. As additional functionality, the system should generate a random password, which is then communicated to a pre-set email address (of the client, presumably). This will probably be done by means of an ordinary email (meaning a password is sent out unencrypted), but I see this system used even on major forum-based sites; and the chances that an informed [1] attacker can take hold of this specific email are quite small, I'd say. In that case they probably 0wn the network already to some extent, so they can take hold of much more information by then.
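The planned scheme — generate a random initial password, mail it to the client, and store only a salted hash in the users table — could look something like this Python sketch. The real site would do this in classic ASP against SQL Server; the helper names are hypothetical, and PBKDF2 is just one reasonable hash choice.

```python
import hashlib
import secrets

def generate_password(length=10):
    """Random initial password to mail to a new client (ambiguous chars left out)."""
    alphabet = "abcdefghjkmnpqrstuvwxyzABCDEFGHJKMNPQRSTUVWXYZ23456789"
    return "".join(secrets.choice(alphabet) for _ in range(length))

def hash_password(password, salt=None):
    """Return (salt, hash); only these two go in the table, never the password."""
    if salt is None:
        salt = secrets.token_hex(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), 100_000)
    return salt, digest.hex()

def verify_password(password, salt, stored_hash):
    """Re-hash the attempt with the stored salt and compare."""
    return hash_password(password, salt)[1] == stored_hash
```

This also shows why admins can no longer hand out a forgotten password: the table holds only the hash, so the best they can do is generate and mail a fresh one.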

Reducing Data Redundancy
Another thing I want to accomplish is reducing data redundancy while we're at it. Right now, we upload copies of all kinds of documents, which are used on the local intranet, to the webserver. Every now and then these documents are updated, and you have to take very good care that the webserver documents stay in sync. One way would be some nightly super batch script which copies everything to the webserver, but that's kind of rough. I want to put everything into a database, and have both internal and external users use that one. For internal users, no access restrictions are necessary. However, external clients must be restricted to seeing only information for which they are cleared. This can be complete sets of documents, or only specific ones, anything. A smart way of handling this requirement is on its way.
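One possible shape for that clearance requirement is a simple join table: documents are stored once, and a clearance row grants one client access to one document. This is only a sketch of the idea, not the system's actual design — the table and column names are invented, and SQLite stands in for SQL Server.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE documents (doc_id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE clearance (client TEXT, doc_id INTEGER);
""")
conn.executemany("INSERT INTO documents VALUES (?, ?)",
                 [(1, "Annual report"), (2, "Study protocol"), (3, "Internal memo")])
# Client 'acme' is cleared for documents 1 and 2, but not the internal memo.
conn.executemany("INSERT INTO clearance VALUES (?, ?)",
                 [("acme", 1), ("acme", 2)])

def documents_for(client):
    """External view: only documents this client has a clearance row for."""
    rows = conn.execute("""SELECT d.title FROM documents d
                           JOIN clearance c ON c.doc_id = d.doc_id
                           WHERE c.client = ?
                           ORDER BY d.doc_id""", (client,))
    return [r[0] for r in rows]
```

Internal users would simply query `documents` without the join, so one copy of each document serves both audiences.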

SQL Injection ahead!
Abandoning Windows Authentication and using FBA instead means SQL Injection comes into the picture even more than now. You know about SQL Injection, don't you? By not checking user input in some way, you risk people accessing confidential parts of your site with no password at all. Or worse: they're able to enumerate all kinds of information about your database, database server, etcetera. They could even bring your server down, drop your database, or execute commands on your database server by using xp_cmdshell. And actually, Windows Authentication is kind of easy, come to think of it. Authentication is performed by the Operating System, and the user is given a server variable USER, which is trivial to check for. With table-based login details you'll have to do everything yourself: setting and bookkeeping Session / Cookie variables and what have you. However, a big advantage of the future system will be the possibility to really log a user off from the system. Windows Authentication requires the user to close the browser after use; even Outlook Web Access does this (have you ever tried to log off and go back to your inbox? You'll still be able to take care of your email).
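The classic injection trick, and the standard defense, in miniature. The real login page would be ASP querying SQL Server; this self-contained Python/SQLite example only demonstrates the principle: concatenating user input into the SQL text lets `' OR '1'='1` match every row, while a parameterized query treats the same input as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, pwhash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'abc123')")

def count_matches_unsafe(username):
    # Vulnerable: the input becomes part of the SQL statement itself.
    sql = "SELECT COUNT(*) FROM users WHERE username = '%s'" % username
    return conn.execute(sql).fetchone()[0]

def count_matches_safe(username):
    # Parameterized: the driver binds the input as a value, never as SQL.
    sql = "SELECT COUNT(*) FROM users WHERE username = ?"
    return conn.execute(sql, (username,)).fetchone()[0]

payload = "x' OR '1'='1"   # classic injection string
```

With the unsafe version the payload matches every user in the table; with the safe version it matches none, because no user is literally named `x' OR '1'='1`.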

Another thing we're not clear on: should we use the SessionID in the QueryString? It's a method you'll find on lots of sites, e.g. web-based email clients, shopping sites and the like. You can use it to double-check whether the user is allowed access to the page: if you don't have the expected SessionID, you're instantly logged off, taken to disneyworld.com or whatever. Actually, I've learned the mantra "QueryStrings are bad", but they can be used to your advantage when handled the right way. However, I'm considering not using them, as Cookies / Session variables can be handled without resorting to the QueryString, of course.
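The double check amounts to: the session token in the request must match the one the server handed out, or the user is logged off. A minimal sketch, assuming a server-side session store (everything here — names, the dict-as-store — is illustrative):

```python
import secrets

sessions = {}  # session_id -> username (server-side store)

def start_session(username):
    """Issue an unguessable session token at login."""
    sid = secrets.token_urlsafe(16)
    sessions[sid] = username
    return sid

def check_request(cookie_sid, querystring_sid):
    """The double check: cookie and QueryString token must agree, and the
    session must actually exist. Otherwise the user gets logged off (None)."""
    if cookie_sid != querystring_sid or cookie_sid not in sessions:
        return None
    return sessions[cookie_sid]
```

The flip side, and presumably part of the "QueryStrings are bad" mantra, is that a token in the URL ends up in browser history and server logs, which is one argument for keeping it in cookies only.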

[1] By informed I mean the attacker has knowledge about the system and wants to gain access. In the process he tries to sniff the email containing a password. As these emails are only sent once to new clients, and afterwards only on a client's request, this will happen at such low frequency that the chances of interception are quite low. He might have to monitor all network traffic for several days, and still not know when it will happen. Maybe it's more feasible than I think, but still I don't think it's the smartest way of finding out a password. Too much trouble, and chances are the client will soon find out the password doesn't work, contact us, and we'll just reset it. There's no elevation of privileges possible with just this attack; the worst that can happen is that the attacker is able to view some confidential data for a certain amount of time.

Monday, March 22, 2004

[QUICK TIP] Google search even more powerful

You can search on Google for words which are located near each other, with some words in between. I didn't know this; it may be very old news, but not for me. It'll certainly extend my search capabilities.

I was a bit too fast in thinking the asterisk was a replacement for the NEAR search term. It seems it has the meaning of 'one word', so two asterisks represent two words in between two search terms. This means I could search for all March 2004 articles on this blog with the command site:sikko2go.blogspot.com "March * 2004". Although I see some articles -including this one!-, not all are there. I assume Google must have indexed all of them, as they already indexed this one, which I posted yesterday; so that leaves room for some discussion as to the function of the asterisk. Btw, I couldn't find any info about this on Google's site itself.

Monday, March 15, 2004

Jiri's Notepad: Cash machine and train reboots

Jiri's Notepad: Cash machine and train reboots

in which Jiri has a friend who was on a train that had to reboot in order to solve some technical difficulties. Ha-ha, but what if I were on that train? What if this train had to reboot at full speed in order to be able to use its brakes again? Like switching off the engine of my car on the highway. Obviously something you just do not do. And probably there's protection against switching off a car while in transit. Although I don't know, never tried...

Sunday, March 14, 2004


The Fishbowl has a piece on YAGNI, an acronym for "You Aren't Gonna Need It", and one of the eXtreme Programming principles. This article really made my day, because my peers tend to tell me I definitely need to make things as generic as possible. While this isn't necessarily bad in general, I sometimes find myself working for hours to make something usable for all possible future cases one can think of (at the present time!), while a much easier specific solution might be at hand. One you can only use in this circumstance, but also one you can code in 10 minutes. The next time you might need this specific solution in another circumstance, it will take only the 10 minutes it took you last time. Probably less, for reasons of increased experience. Isn't the point of this whole thing that you need to be aware of the general picture, instead of making virtually all things generic? Take, e.g., database access. You could build just one wrapper for all web-based database access you'll ever need in every future project. It will take a substantial amount of time, but then again you'll never find yourself struggling with connection strings etcetera.
Only thing: software engineering principles tend to keep developing over time. What's in common use today will be outdated tomorrow, and certainly in two years. Secure database access best practices will certainly change vastly. And what to think of new technologies? At my company we're still on IIS 5.0 and ASP, with .NET being something we're just looking into as something for the future. Suppose I'd put an abundance of time into building the most secure db access wrapper. It could be superfluous the moment we change to .NET.

(by the way, my colleague really had a good laugh last week installing VS.NET for the first time. Well, I told him - as I read it somewhere, don't remember where - that you surely need to write all your code from the ground up, because this time there's no way to have your VB6 code converted to .NET. The next thing he does is open up his favorite VB6 project in VS.NET, which launches some wizard that wants to do exactly that: convert his project to .NET :)

Anyway, the YAGNI piece is an excellent example of my own opinion about software development. And, working my way through the comments, I seem to be more into eXtreme Programming than I'd thought.

Btw, a Google search on YAGNI yields a lot of results, of which this specific article is already second, just below some XP Wiki site. Sounds cool, for something written just days ago!

Disappearing Internet Explorer Links Toolbar

Why is it that every time - well, quite often in fact - my Internet Explorer Links Toolbar, on which I keep my most visited sites, keeps disappearing? I put everything in position, lock the bars, and still the bar isn't visible a lot of the time. But hey, wait, can it be that when I'm doing offline reading they're automatically not visible or something? OK, I'm offline right now so there's no way to prove this. Some research needs to be done here. I like learning new things, so that won't be a problem. I'll let you know the answer.

Wednesday, March 10, 2004

The World got itself some more Data Entry Screens

Last week I built some dozen and a half new Data Entry Screens for use within my company's homegrown Access database. We use it as a front end to our SQL Server. The whole system has a couple of features, like double data entry with a built-in Data Compare. The results of the compare are visible to the user in electronic or paper form, and can then be looked up in the original paper forms for adjustment. In the end, of course, the data in both datasets will be the same, and the data is declared 'clean'. Another feature is an Audit Trail mercilessly logging all updates made to once-entered data. The user has to give a reason for making the change. Trail data is also available to the users. We work with roles: some people have a limited ability to alter data or enter some new system-wide information, others are only allowed to enter data and nothing else. All kinds of reports can be automatically printed by the user. As a matter of fact, the whole system is self-supporting 95% of the time. Only some key information is to be added and updated by us, the administrators, when necessary.

But before my new Data Entry Forms can be used in the system, some validation needs to be done. First of all, I open up the forms myself and enter some random data. This is to find any typos in field names or the like. Most of the Form is built by hand; well, some back-end code is generated automatically, but the fields have to be placed on the Form by hand, or recycled from some old Form. One can understand that errors on my behalf are possible. Well, after the Forms fire up nice and clean, it's time to make the dreaded Test Protocol. This is a paper manuscript which a Data Entry Clerk must follow closely to get some test data into the screens. The test data consists of some test patients (patients are the main entity in this -medical- database). One test patient's data is used later for testing the export to the statistical package; another has data which is completely different in both data entry runs (we have double data entry, remember?). This is done to test the Compare function: all these different fields have to be coughed up by it after being keyed into the system. Another patient will have completely identical data in both runs: the expected result from the Compare will be obvious to the considerate reader. Other test patients are used for other system functions (for example, because a lot of data can be missing, unreadable etc., we use a range of standard 'filter codes' to address these issues, but that's a bit too much detail for now). Let's consider the Test Protocol done for now.
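The core of the Compare function described above is simple: key the same paper form in twice, and flag every field where the two runs disagree for lookup in the original. A minimal Python sketch of the idea (the real thing lives in Access/VBA, and the field names here are invented):

```python
def compare_runs(run1, run2):
    """Return the fields whose values differ between the two data entry runs,
    with both conflicting values, so they can be checked on the paper form."""
    return {field: (run1[field], run2[field])
            for field in run1
            if run1[field] != run2[field]}

# One test patient with a deliberate transposition typo in the second run:
first_run  = {"patient_id": "P001", "weight_kg": "72", "smoker": "no"}
second_run = {"patient_id": "P001", "weight_kg": "27", "smoker": "no"}

diffs = compare_runs(first_run, second_run)
# diffs now flags only weight_kg, with both keyed values for verification
```

The test patients in the protocol map directly onto this: the all-different patient should make every field show up in the diff, and the all-identical patient should produce an empty one.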

Be Careful
Other things can go wrong, for example me forgetting to put the appropriate user rights on a newly created SQL Server table. Funny thing is, this will become obvious quite soon, because we build these new tables into the production system. It's now possible that some database function used by one of the regular users of the system, one which needs to check all tables of a certain kind in the database, gets stuck on this table. The error the user gets on his / her screen -which is obvious to me and can be fixed in seconds- will get my attention to detail back quite quickly, as this should rather not happen :-)

Now I hear you guys rambling: for goodness sake, why do these morons work on the production server while building new screens into their live systems?!? Let me tell you, as a two-man IT Department of a small company, it's just not an option to have a dedicated server at our disposal for testing purposes. It would be a case of too much money on one side, and too much administrative hassle on the other. Let me tell you, I've seen examples far worse than our situation. For the size of company we have, we're doing quite well. And it actually works very smoothly. SQL Server has its nightly database backups and hourly backups of the Transaction Log. Tape backups are performed at night. So in case of a catastrophe we lose at most one day of work (the catastrophe happening at 6pm and taking the tape streamer with it in its decline, as every good catastrophe will). And in case one of us screws up with the database, at least we'll lose no more than an hour of data entry. Which I can live with. And oh yeah: we do check our tape backups occasionally, thank you very much.

Go and Test 'em
Where was I... Ah, I see we just finished the Test Protocol. OK, now some basic printing is done on some packs of paper (at least our power printer / copier does holes and staples itself, can print double-sided with no trouble, etc). Don't forget to insert the Access printouts of some Form Security checks before making additional copies, or you'll have to insert copies into copies by hand. We have different kinds of form security, in the sense that sometimes it must be possible to enter new information on the same screens, while on other occasions (read: screens) this must explicitly not be the case. Most of the time this works reasonably well. The only problem arises when one wants to use an old template screen in a new situation: suppose in the new situation it must be possible to add new information, while the old template doesn't allow it. A decision must be made to make a new copy of this screen, or adapt the old one. In the former case the whole test phase must be walked through again. Which costs me probably about 3 hours, divided over multiple days during the test phase. And it will take other people some additional hours to enter data, perform checks, etcetera. It is feasible that sometimes a trade-off will be made between the additional hours of work that have to be put in and the ease of just tweaking the old template a little bit.
Now most of the work for me is done. Only, chances are some Data Entry person will come back to me after failing to enter some test data. After some little fixes are made, everything is supposed to work like a charm. I expect the Screens I built last week will be available in the system next week. I don't know if that's a good score compared to the way things are done in bigger companies and / or IT Departments, but that's for you to let me know in the comments.

For now I consider this the end of the article. I just wanted to let you know a bit about life in a small IT Department. It probably is completely different from your daily working environment. I hope you found it interesting reading, and if you'd like to hear some more stories, tell me what you want to know... until next time

Weird Word thingy

Fired up 3 Word documents into which I had copied some HTML pages I wanted to read offline. As they couldn't be saved the normal way, I used my usual alternative: paste into Word. Sometimes Word just freezes, as it seems to choke on an overload of HTML code. But now I have 3 happy Word documents, accessible as ever with the mouse, but with all Word functionality disabled. Basically I can scroll up and down the docs, but nothing else. Which seems a little odd. Lucky man that I am, however, I can still read the information, so my goal is reached at least.

[NOTE TO SELF]Don't post this! There was this stupid Dialog Box open somewhere underneath all windows, which just didn't come up[/NOTE TO SELF]

Well, I might just as well post it. After all, how many times do you have these kinds of things happening to you... massive quietness... it happens to me every now and then. Maybe I'm just stupid, but my opinion is that this is also a usability error. Somehow you should be notified about an open dialog box. Maybe it should remain on top of the specific Window, in which case it's not possible to do anything else before closing it. You asked for it to be opened, now tell it to close as well... I just repeated my specific problem. Open 2 Word documents, open up the thesaurus on one document, and then switch windows a bit. The thesaurus will fall into the background, rendering both Word documents useless. Now, closing one of the documents yields the following error: "You cannot close Microsoft Word because a dialog is active. Switch to Microsoft Word first and close the dialog", after which the thesaurus finally pops up again... a little late. Why couldn't it just stay on top? I want to look up the meaning of a word, and not do something else. Anybody have a thought on this matter?

Friday, March 05, 2004

LINK: DotNetRocks

A radio show on .NET programming. You can listen to the show live as it happens, and ask questions during the show. Alternatively, download all former shows in mp3 or Windows Media format, or listen to a Windows Media stream (in which case you don't need to download the complete show first).


Just wanted to let you know that Bloglines is indeed very OK. For those not in the know: it's an online news aggregator. Well, some weeks ago I told you why I didn't use news aggregators, but that was before I discovered Bloglines. What I especially like is the ease with which it works: an easy way to get up to speed with your blogroll is to use 'recommended blogs', where I found a lot of the ones I already read. And adding new feeds manually is a breeze. The best part, however, is that it's online, on the web, and it remembers which blogs I've read -on a blog-by-blog basis-, how many new items there are, etc. This way, I can use it anywhere I like, at home, at the office, and still never read an old article again. I don't even see old entries again, because they're hidden from view once you've opened the specific feed. Of course, you can recover old items from the feed when needed...
(disclaimer: I'm not funded by Bloglines in any way for writing this entry, I just like the service)