Tuesday, June 26, 2007

Can't find gacutil in .Net 2.0

The .NET Redistributable no longer contains gacutil.exe, so if you need it on a machine where you can't install the SDK, just copy the gacutil.exe and gacutil.exe.config files over from a machine that has the SDK installed.

If this sounds familiar, I have blogged about it before, but looking through my Google Analytics keywords, people are searching for this topic and not spending enough time on the page to read the answer (unless they can read that post in less than 2 seconds).

I know I tend to babble though (this is a blog after all, and not technical documentation), so hopefully this quick post with the information right at the top will help those people who are in a rush and need the information in bold type.

Friday, June 22, 2007

Figuring Out TransactionScope

One of the things that came out of Tech Ed was TransactionScope (and the other things in the System.Transactions assembly) for distributed transactions. Prior to TransactionScope the only way of having a transaction span remote SQL connections was to use the System.EnterpriseServices pattern: derive a class from ServicedComponent, mark it with Transaction and AutoComplete attributes, install the component into Component Services and you're done!

This is a pattern I'm very familiar with, but I don't need to tell those of you that use it how much trouble it quickly becomes. But for those of you that don't, here's a few of my pet peeves:

  • During development a COM+ Server application will run outside of the debugger, so you have to explicitly tell Visual Studio to attach to the dllhost.exe process if you want to debug the code within it.
  • The ASP.Net user that web applications run under doesn't have the permissions to lazy-load the component when it's needed, so you have to either run impersonated as another user, or add a post-build event to your project that calls Regsvcs every time your assembly changes.
  • Deployment is difficult as the COM+ components have to be installed (which seems to take longer on each install)
  • Tests using the components are slow as they have to be registered before they are run
  • Transactions are declared at a class level, which means if you want to go from no transaction to required, to requires new, you have to create 3 classes and figure out a way of chaining them together in the way you want.
In short, COM+ is a pain that I've had ill feelings towards for a while, but there seemed no alternative for what we need.

Until now that is.

Using TransactionScope certainly seems as simple as the documentation suggests, and it does perform better than COM+ (in the limited tests that I've done). The transaction management seems faster, and we're not calling into another process to do the work like I think we do with COM+.

A couple of things I found along the way, though:
  • The TransactionScope needs to be instantiated before the object that will be controlling your transaction, so you need to create a scope before you create a SqlConnection
  • You don't seem to be able to enlist in a new SqlTransaction inside of a required scope and have that SqlTransaction commit separately from the ambient transaction. However, you can create a new connection inside a new scope declared with a Suppress TransactionScopeOption
Both seem quite reasonable, as long as you know they will happen. I'm not convinced I've fully explored the options available to me for the second point, though, so if anyone knows whether this is actually the case then please let me know.
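For reference, here's a minimal sketch of the pattern, covering both points above (the connection strings and commands are placeholders of mine, and you need a reference to the System.Transactions assembly):

using System;
using System.Data.SqlClient;
using System.Transactions;

class TransactionScopeSketch
{
    static void Main()
    {
        // Create the scope before the connection so the connection enlists in it.
        using (TransactionScope scope = new TransactionScope())
        {
            using (SqlConnection connection = new SqlConnection("Data Source=.;Initial Catalog=Orders;Integrated Security=SSPI"))
            {
                connection.Open();
                new SqlCommand("UPDATE Orders SET Status = 1", connection).ExecuteNonQuery();
            }

            // Work that must commit regardless of the ambient transaction goes in a
            // nested scope declared with TransactionScopeOption.Suppress.
            using (TransactionScope suppressed = new TransactionScope(TransactionScopeOption.Suppress))
            using (SqlConnection auditConnection = new SqlConnection("Data Source=.;Initial Catalog=Audit;Integrated Security=SSPI"))
            {
                auditConnection.Open();
                new SqlCommand("INSERT INTO Log (Message) VALUES ('Updating orders')", auditConnection).ExecuteNonQuery();
                suppressed.Complete();
            }

            // Nothing on the ambient transaction commits until Complete is called.
            scope.Complete();
        }
    }
}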

But for now it looks like TransactionScope executes faster than using COM+, and it looks to be easier to develop as well.

Wednesday, June 20, 2007

Scalable Apps with Asynchronous Programming in ASP.NET

Just opened VS.Net and noticed on the start page a link to the article Scalable Apps with Asynchronous Programming in ASP.NET. The title sounded familiar, so I took a look and it's by Jeff Prosise, who gave a talk on the same subject at Tech Ed.

Take a look at the article if you weren't able to make Tech Ed this year, or if you can't understand my notes from the session in my previous post :).

Friday, June 15, 2007

Where I went on Tuesday in Orlando

If you read the blogs I wrote while in Orlando last week you'll know about how I got lost on the way back from the Mall. Well, I've since used Google Earth to find out where I went.

I've created a file you can open with Google Earth that will list a number of bookmarks that mark the steps along that journey. The file will open into your sidebar, and you just have to double click each step to be taken to it. Single click one of the hyperlinks to see the full description for the step.

Download the file here

Download Google Earth here (if you haven't already got it)

Friday, June 08, 2007

Tech Ed Day 5 - Scalable ASP.NET Web Applications

I only had the chance to attend one session here on the final day in Orlando. Luckily it proved highly valuable, and contained plenty of information.

In Building Highly Scalable ASP.NET Web Sites by Exploiting Asynchronous Programming Models, Jeff Prosise from Wintellect explained how to make your pages work under heavy loads.

Under the hood ASP.NET has two thread pools, one for handling I/O, and the other for worker threads. The worker threads are what ASP.NET assigns requests to, and they are where all the work in the application is done. There is a set number of these threads, and once they are all in use, requests get placed in an application request queue. This is where you start seeing a performance hit, as your users' requests are blocked because there are no threads left to take them.

Once this queue becomes too full the server starts responding to inbound requests with service unavailable errors. The key to building scalable apps is figuring out where your bottlenecks are--what is stopping that worker thread completing and returning to the pool?

If the bottleneck is CPU usage, then the only solution is to throw more servers in the web farm. But more often than not, the bottleneck is I/O operations, like file system or database access.

There are three places where we can place code that can be converted to async operations to better scale an application: Pages, HttpHandlers, and HttpModules. If you have I/O operations happening in any of these stages then they are candidates for async operations.

An asynchronous page starts its processing on one thread, and finishes it on another. ASP.NET 2.0 contains a framework to allow you to do this easily. Specify an Async property in the @Page directive in your .aspx code, and call RegisterAsyncTask in the Page_Load event.

RegisterAsyncTask takes a PageAsyncTask object, which contains event handlers for the Begin, End and Timeout events that an async operation can have. Within your Begin handler kick off an asynchronous operation and you're done.
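Putting those pieces together, here's a minimal sketch of an async page that runs a database query without tying up a worker thread for the duration. The connection string, query and OrdersGrid control are placeholders I've made up for the example:

// In the .aspx markup: <%@ Page Language="C#" Async="true" AsyncTimeout="30" ... %>
using System;
using System.Data.SqlClient;
using System.Web.UI;

public partial class Orders : Page
{
    private SqlConnection connection;
    private SqlCommand command;

    protected void Page_Load(object sender, EventArgs e)
    {
        RegisterAsyncTask(new PageAsyncTask(BeginGetOrders, EndGetOrders, GetOrdersTimeout, null));
    }

    private IAsyncResult BeginGetOrders(object sender, EventArgs e, AsyncCallback callback, object state)
    {
        // "Asynchronous Processing=true" is needed on the connection string for BeginExecuteReader.
        connection = new SqlConnection("Data Source=.;Initial Catalog=Shop;Integrated Security=SSPI;Asynchronous Processing=true");
        command = new SqlCommand("SELECT TOP 10 * FROM Orders", connection);
        connection.Open();
        return command.BeginExecuteReader(callback, state);
    }

    private void EndGetOrders(IAsyncResult result)
    {
        // Back on a (possibly different) thread: finish the call and bind the data.
        using (SqlDataReader reader = command.EndExecuteReader(result))
        {
            OrdersGrid.DataSource = reader;
            OrdersGrid.DataBind();
        }
        connection.Close();
    }

    private void GetOrdersTimeout(IAsyncResult result)
    {
        // Fired if the operation exceeds the page's AsyncTimeout.
        connection.Close();
    }
}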

I don't have enough time to go into detail on the HttpHandlers or HttpModules (I'm rushing off in a minute to catch a plane), but they're pretty much the same. We kick off an asynchronous operation from a Begin method, and then complete the call in an End method--standard async pattern stuff.

This is about scalability, not performance--you may see a performance increase if your tasks are capable of running in parallel, but not otherwise.

So how do you call an async operation from the Begin method? If your page just goes straight to a database (i.e. instantiates a SqlConnection directly), then there is a method called BeginExecuteReader on a SqlCommand that will do this for you.

However, I'm sure that most developers will have a layer in between the page and the SqlConnection, and in these cases you do need to do some more work. You'll need to expose the Begin and End functionality of the SqlCommand up to this layer so it can be called. What separates these layers will of course dictate how easy or hard this is to do.

In an attempt to build scalable web apps in ASP.NET, some people will try to use threading to help them out. There are 3 wrong ways to do this, and 1 right way:

DO NOT

  • Use Thread.Start - it is an unconstrained use of threads that the architecture has no control over
  • Use ThreadPool.QueueUserWorkItem - it steals threads from the ASP.NET worker thread pool
  • Use Asynchronous delegates, for the same reason as above
ONLY use threads in a custom thread pool. There is sample code for a high performance thread pool available on the Wintellect website; unfortunately I don't have time to find a direct link, but I'm told there is one there.

And I guess that is it for my first Tech Ed. I've mostly enjoyed my time here, and it has certainly been a very valuable experience. There's a lot of new stuff here to mull over in the coming weeks, and I look forward to checking some of it out in detail.

Random Musings of American TV

A voice-over at the end of King of Queens just said "Closed captioning is brought to you by", followed by an advert which said "An Azo tablet a day keeps the yeast infection away". Pleasant.

I pity anyone who's been taking the diabetes drug called Avandia, which has apparently been linked to heart problems. There's an advert running every other break today. A really forceful voice shouts "Have you or a loved one taken Avandia and had a heart attack or died from a heart related illness?! If so call 1-800-BAD-DRUG for a free consultation!" If you hadn't had a heart attack before, you probably would after hearing that.

The PBS channel on my hotel TV has been playing without sound for 3 days now. Not that I mind, I've not seen anything there I want to watch yet.

News channels here are all over the story that Paris Hilton has been released from jail after only 3 days. I guess no one cares about finding Kelsey Smith's killer now. I had to look online to get an update on what was going on.

It's Hurricane Week on The Weather Channel. No hurricanes are forecast though. Wonder what they're covering after the scheduled storm has failed to show up on time.

The strangest things get censored. I've heard the word "ass" get bleeped out, yet it's ok to show three witches casting a spell that causes a man to burn in front of your eyes in a TV show on at 6pm.

Is there really any call for a TV channel devoted solely to golf?

Thursday, June 07, 2007

Some Photos

I've taken some photos along the journey and in Orlando. In no particular order:



What I think is ice or snow, viewed from about 40,000 feet over the Atlantic



The view from the Convention Center



I was hoping it was going to be "Prat"



But this being America, it wasn't. By the time the plane had done the "Jesus" bit the "Praise" was dispersing rapidly.



I have no idea what these are, but they're dotted all along the road. Probably pipes for the Morlocks.



Towers. Pink towers.



The (slightly askew) City Beautiful.



Nice view of the Comfort Inn from my hotel room.



Feeding time at Tech Ed.



The heavens open in Orlando, will this relieve the drought? I don't know, but switching the sprinklers on that morning was rather pointless now, wasn't it? ;)



What is it about IT folks and Ninjas? Funny though, more proof that Microsoft do actually have a sense of humour.

Tech Ed Day 4 - Understanding ASP.NET Internals

Understanding ASP.NET Internals was perhaps the most technical session I've been to so far at Tech Ed, and seeing as it's now Thursday and I fly home tomorrow evening, probably the most technical I'll go to.

Rob Howard presented a session that was both fast-paced and, judging by my current knowledge of ASP.NET, very in-depth. As a developer I'm mostly happy if the applications I'm creating work as they should, but knowing what's happening under the hood is useful on many occasions.

However, as I wouldn't call myself a Web Developer (I'm much happier coding Esendex's Windows Services, where I know what the inputs and outputs need to be), I nervously took my seat in a packed session room this afternoon.

First up was an introduction to Rick Strahl and Michele Leroux Bustamante, both of whom have blogs relating to ASP.NET's internals. Rob's blog also contains links to all the slides and code samples used in the presentation, so head over there if you weren't able to get in the session. I don't know if they were turning people away at the door, but the room was definitely full.

The first point of note is that you can host the ASP.NET runtime in your own process--now why would you want to do that? Well, if you need to write tests for your pages then hosting an environment that can act as a very controlled mock-up of IIS can be very useful. The code to do it didn't seem that onerous either.

Download the code samples and check out the MyHost.cs file in the Host directory and see for yourself.
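For a rough idea of what's involved, here's a minimal sketch of the classic pattern (this isn't the MyHost.cs from the samples; the physical directory is made up, and the assembly containing the host class needs to be in that directory's bin folder):

using System;
using System.Web;
using System.Web.Hosting;

// Instances of this class run inside the ASP.NET application domain
// that CreateApplicationHost spins up.
public class MyHost : MarshalByRefObject
{
    public void ProcessRequest(string page, string query)
    {
        SimpleWorkerRequest request = new SimpleWorkerRequest(page, query, Console.Out);
        HttpRuntime.ProcessRequest(request);
    }
}

public class Program
{
    public static void Main()
    {
        MyHost host = (MyHost)ApplicationHost.CreateApplicationHost(
            typeof(MyHost), "/", @"C:\MyTestSite");

        // The page is executed by the hosted runtime and written to the console.
        host.ProcessRequest("Default.aspx", "");
    }
}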

Then we moved onto the differences between IIS 5.1 and IIS 6.0 and how they serve ASP.NET applications differently. Check out the slides to see the diagrams for yourselves. I don't have PowerPoint on my laptop, so I can't say exactly which slides to look at. Should be pretty obvious though from the headings.

Rob made a remark about SQL Server session state that could be useful to some of you out there. When using session state in the default way, ASP.NET will read and lock on the way in and write and unlock on the way out. You can optimise this to only do a read if you are not updating session state, or disable it totally if you are not using session state at all. Read this MSDN article on session state best practices for more information.
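In practice that's just a page directive attribute; something along these lines (a sketch, not taken from the session slides):

<%-- This page only reads session state, so skip the exclusive lock --%>
<%@ Page Language="C#" EnableSessionState="ReadOnly" %>

<%-- This page doesn't touch session state at all --%>
<%@ Page Language="C#" EnableSessionState="False" %>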

Did you know about the @OutputCache page directive? This can be used to dramatically speed up the handling of pages, as the rendered output is cached in memory on the server so the page doesn't have to execute again for every request. Check out more details on ASP.NET caching here and here.
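The simplest form looks something like this (the duration is an arbitrary choice for the example):

<%-- Cache the rendered page for 60 seconds, with a separate cache entry
     for each combination of query string and form parameters --%>
<%@ OutputCache Duration="60" VaryByParam="*" %>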

Finally we have the HttpContext.Items collection. Say you have a bunch of HttpModules and HttpHandlers loaded that use the same information. You can set this information in the first module/handler that is loaded and shove it into the Items collection. Then in subsequent handlers or modules you can check to see if the object you need is already there and use it, so you don't have to recreate it.

This can be handy if you're doing a lot of the same database hits each time through your modules or handlers. Rob calls this "per-request caching" as the Items collection (along with the HttpContext) gets dumped on completion of every request.
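A sketch of the idea, with made-up Customer lookup types, from inside a module:

using System;
using System.Web;

// Hypothetical lookup, hard-coded so the sketch compiles on its own.
public static class CustomerRepository
{
    public static object Load(string id)
    {
        return "Customer " + id;
    }
}

// Loads the customer once per request; later modules and handlers reuse it
// from Context.Items instead of hitting the database again.
public class CustomerLoaderModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += delegate(object sender, EventArgs e)
        {
            HttpContext context = ((HttpApplication)sender).Context;
            if (context.Items["CurrentCustomer"] == null)
            {
                context.Items["CurrentCustomer"] =
                    CustomerRepository.Load(context.Request.QueryString["id"]);
            }
        };
    }

    public void Dispose() { }
}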

So, a fair bit of information there. I strongly recommend downloading the code samples and slides to see for yourself. Contained within them is a control called EventLogs.cs (in the TechEd2007/Controls directory) that will render a machine's Event Logs to the screen. Apparently there's a company selling something exactly like this for $200, so if you want one that's free, and comes with the source, go check it out.

Tech Ed Day 4 - Hands on Labs and Virtual Tech Ed

I was due to attend a session today in one of the Interactive Theaters (which is kind of like a seminar, rather than the lecture-like Breakout Sessions) called Why Software Sucks, but as I got there it was far too full. All seats inside were taken, people were sitting on the floor, and the crowd at the entrance was about 5 or 6 deep (and all were taller than me), so I could neither see the slides and whiteboard nor hear much of what the speaker was saying.

So I went to try my hand at one of the Hands On Labs, sort of like interactive computer based learning. You sit at a dual screen PC and have a document on one screen telling you what to do, and (in this case) a copy of Visual Studio on the other where you can work.

I first tried the Introduction to LINQ, as this has sparked my interest this past week. However, I soon found that this example was in VB.Net, not C#. No matter, I thought, I'll try it anyway.

So I tried to get to grips with the VB syntax, but after nearly 4 years of C# it's really not nice. I thought I'd pick up the basics from it, but the syntax is different so there wasn't much point. Plus, the section I really wanted to do (an example of using LINQ to SQL--which is using LINQ as a high-level method for accessing databases) wouldn't work as the example Northwind database wasn't installed on the machine I was on.

So not much luck with that one.

So I tried the Introduction to the Windows Communication Foundation. I can't remember the exact name, but it was pretty much that. This was in C#, so it was looking better from the start.

I found this to be a pretty well laid out introduction to WCF. It walked you through the steps of setting up the "Contract" (or interface), and hosting this both in a console app (or potentially any .exe process, like a Windows Service), and on IIS.
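To give a flavour of it, here's a minimal sketch of the sort of thing the lab walks you through--the service name and operation are ones I've made up, and the address and binding would normally come from the configuration file rather than code:

using System;
using System.ServiceModel;

// The "contract" is just an attributed interface.
[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name)
    {
        return "Hello, " + name;
    }
}

public class Program
{
    public static void Main()
    {
        // Self-hosting in a console app (a Windows Service would look much the same).
        using (ServiceHost host = new ServiceHost(typeof(GreetingService),
            new Uri("http://localhost:8080/greeting")))
        {
            host.AddServiceEndpoint(typeof(IGreetingService), new BasicHttpBinding(), string.Empty);
            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}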

From this example it's clear that the actual code you write is relatively simple. There's a bit of a ball ache to go through on the client side in order to consume the service, though. You have to run a command line utility to generate the proxy code and then add this to your project, but maybe this is integrated into Visual Studio 2008, or will be included as an add-on to 2005 eventually.

The difficult work is in the configuration files. You need to specify all the things that will make the channels work here instead of in code. But this should be resolved if you follow the Stock Trader example previously mentioned this week, which shows how to load this information from a centralised database.

After this I stopped by Virtual Tech Ed, which looks to be a large projector screen playing interviews with various people who may or may not have hosted sessions. I didn't see it from the beginning, but my attention was drawn to a talk about developers and security.

There are certain things you can do in code that make it easy for a hacker to gain access to your system--common things like script and SQL injection, for example. Instead of relying on the developer to know about all the latest hacking methods being employed, this talk was about treating security like just another programming component.

For instance, if you're writing a web application and you need the user to enter some text information, you'll use an ASP.NET TextBox, you don't write your own text box. Likewise, you don't create your own encryption algorithms--you use ones that are already available.

So why does the developer have to know every single way to secure their systems if there are utilities and components out there that can do it for you?

The product highlighted that can do this is called DevInspect, from SPI Dynamics. I've not tried it out, but the sales pitch was certainly interesting. It was said that this tool can not only tell you when you are doing something wrong, but also tell you the right way of doing it. I'm sure there must be alternatives to this particular product, but what I found interesting is the concept of handling your security this way.

As with everything security related you can't, of course, rely on just one counter measure, and you can't expect any one component to tell you exactly how to do something the right way.

But if we can use tools like this to highlight where vulnerabilities lie, then it's one less thing to worry about. It can only make our own systems, and our customers' data, more secure.

Tech Ed Day 4 - More on C# 3.0

Sessions today started at 8am, but I mistakenly turned up at 8:30 as that's when the previous days' sessions have started. So I unfortunately missed the first half of Microsoft Visual C# Under the Covers: An In Depth Look at C# 3.0.

I've already written about some of the new features that were covered in the LINQ session from Tuesday, but I was hoping to find out more today. Judging by the summary given at the end though I don't think I missed anything that wasn't at least introduced in the LINQ session.

The thing that I walked in on was usage of the "yield" keyword. This isn't a C# 3.0 keyword--it's in 2.0 already--but I didn't know about it. I found this article on what it is, and it's pretty useful if you are providing your own implementation of GetEnumerator.
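A quick sketch of the idea--the compiler builds the enumerator for you:

using System.Collections.Generic;

public class EvenNumbers
{
    // yield return generates the IEnumerator implementation behind the scenes;
    // no hand-written enumerator class required.
    public static IEnumerable<int> UpTo(int max)
    {
        for (int i = 0; i <= max; i += 2)
        {
            yield return i;
        }
    }
}

// Usage: foreach (int n in EvenNumbers.UpTo(10)) { ... }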

Another thing mentioned was Extension Methods. The idea is that you can add methods to an existing type to cover the operations you need, without changing the type itself. This article goes into some depth on it.
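For example, something like this (a sketch of my own, not from the session):

using System;

// The "this" modifier on the first parameter makes WordCount callable
// as if it were an instance method on string.
public static class StringExtensions
{
    public static int WordCount(this string text)
    {
        return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries).Length;
    }
}

// Usage: int words = "one two three".WordCount();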

Then the final thing that I heard about was Partial Methods. These are similar to the partial classes that we have in C# 2.0 but obviously apply to methods. They seem to be hooks in the code that another part of the same partial class (often generated code) can optionally provide an implementation for. I found an article that explains it somewhat, and explains where they could be used.
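As I understand it, the shape is roughly this (my own sketch; if no implementation is supplied, the call is simply removed at compile time):

using System;

public partial class Order
{
    // The hook: declared but not implemented in this part of the class.
    partial void OnTotalChanged();

    public decimal Total { get; private set; }

    public void SetTotal(decimal total)
    {
        Total = total;
        OnTotalChanged();
    }
}

public partial class Order
{
    // The optional implementation, typically supplied elsewhere.
    partial void OnTotalChanged()
    {
        Console.WriteLine("Total is now " + Total);
    }
}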

Another interesting point though, was that the C# 3.0 compiler is backwards compatible with C# 2.0 code, so the new compiler can be used straight away. I'm not sure where to get it from to try it out though, or whether it's stable enough for production code yet.

If anyone does know, leave a comment.

Wednesday, June 06, 2007

FxCop

I forgot to mention that the Code Style and Standards session the other day mentioned an application called FxCop. This is a code analysis tool that will point out potential problems with your code that you may not have been aware of before.

I mean to have a look at it in the coming weeks, but I don't have any code on my laptop to play with it on.

Looking at the website it appears to be a free download.

Tech Ed Day 3 Part 2 - .Net Code Protection

Esendex don't currently ship any .Net applications to customers, so protecting the code which is inherently present in .Net assemblies has never been an issue for us. Obfuscating the code that runs on our servers would just be a needless overhead, and a maintenance headache.

But that might (dare I say "will") change in the future, so I thought I'd get a heads up of what is required by attending the .Net Code Protection session today.

A good rule of thumb, we were told, for deciding whether or not you need to obfuscate your assemblies is to ask yourself if you would mind if your source code was published. For open source applications this is a no-brainer--there's no need to obfuscate if you're already giving away the source.

But if that same application contains things that could be damaging to you if discovered by an external entity, then you should definitely obfuscate. This includes proprietary intellectual property you don't want competitors accessing, or if access to the source code would highlight something about your IT infrastructure--such as the database schema, or file system.

So how do we obfuscate? Well Visual Studio 2003, 2005 and now (the newly named) 2008 (previously codenamed "Orcas") all ship with a copy of Dotfuscator Community Edition. Apparently if you're a registered user of this edition you can now get an enhanced version that includes a number of new features.

Dotfuscator doesn't need the source code, you just pass it the built assembly and let it do its stuff. But what does it actually do?

It can

  • Rename identifiers (like variables, method names, type names, etc)
  • Remove Metadata
  • Mess up the perceived control flow
  • Encrypt string literals
The effect that this has is to make it difficult to decompile the code using tools such as Reflector. It won't stop the most determined hacker getting your code, but they will need to dedicate a lot more time to it in order to get it.

Messing up the perceived control flow is probably the most interesting point there. What this does is take how the intermediate code is due to execute and rearrange it to look like spaghetti code. It still executes the same, but trying to figure out where everything is going will take much longer.

And that's what obfuscation comes down to. In its basic form all it is doing is making a mess of the assembly in an attempt to make it so difficult to decompile that people won't bother. It can never make your code hacker-proof. It's like installing a visible alarm on your house. You've not made it any harder to break into, but burglars will probably choose a house without an alarm first.

Dotfuscator now comes with ways of embedding information into your code that will detect and react to attempts to tamper with the code. In your code you specify attributes around the places you want to protect. One of these contains a delegate that is called when a tamper check is fired.

In this delegate you can do whatever you want--close the application, notify a web service, disable certain commands. It was even joked that you could format their hard drive.....at least I think it was a joke ;)

Of course, if someone did manage to decompile your code, they could just strip out these tamper sections, but that means they would have had to fully decompile it, which the obfuscation will make difficult.

So a decent session to go to, with plenty of information to chew on for when (or "if" ;)) Esendex decides to ship .Net applications.

An Englishman in Orlando

Yesterday's escapades really brought the culture shock home for me. The only thing I've found that is the same over here is the language, mostly. I've had to repeat myself a few times but maybe that's because I mumble.

Even the toilets flush differently. UK toilets just pile in lots of water to force whatever you've done down the drain. Here it seems to suck everything out of the bowl and then fill with water again. The way over here is far less primitive than the UK way, but US toilets have so much more water in the bowl. Makes the end result look a whole lot worse...

You have to manually flush urinals. Now, this could be a water saving feature if it were not for the fact that urinals over here have standing water in them. So after you've relieved yourself you can see it. You wouldn't have to flush them if your business just went down the drain like in the UK.

And sales tax! Want to buy that $9.99 piece of plastic as a souvenir and only have a $10 bill? You'd like to think you could buy it, but you can't unless you have enough money to cover the 6.5% sales tax tacked onto seemingly everything.

We have sales tax in the UK, but the sticker price includes it. Why not over here? Don't some people pay tax? Is the sales tax optional for some retailers?

So many questions. I didn't think I had to do any homework before coming here, having watched hundreds, maybe even thousands, of films set in the US. I was wrong about that one.

Pedestrian crossings are another thing. In the UK you press a button at the side of the road, wait for a green man to show and when he does you can cross, safe in the knowledge that the traffic is waiting at the red lights.

Not here, and speaking to a man from California that I met, not there either, so it's not just an Orlando thing. Here you press a button and wait for the signal to cross. That signal doesn't mean it's safe to cross, just that it's safer.

Say you're at a crossroad, and you just want to cross one road. You press the button and wait. Eventually red lights will show along one stretch and you get the signal to cross. But the traffic on the other road isn't stopped, and you're left figuring out whether you've got right of way over the cars turning into the road you're crossing.

The man from California said that pedestrians do have priority, but that's not what I've seen.

Cars are king over here. Everything is laid out to be so car-friendly that there is little to no consideration for pedestrians. On yesterday's ill-fated walk back from the mall at least half of it had no pavement.... sorry, sidewalk. Well actually "sidewalk" is more appropriate as it's very descriptive. You have to walk at the side of the road--there's rarely a raised pavement to follow outside of the new, artificial roads around the hotels and Convention Center.

You can't walk anywhere really, everything is just too far away to walk in this heat (and rain--the heavens have opened here, and I no longer wonder where all the lakes come from). You need a car, or a taxi, or a bus. But cars need to be hired (and I wouldn't dream of driving over here, even in someone else's car), taxis aren't easily hailable, and bus stops don't have timetables or route maps on them.

How visitors are expected to get around is beyond me.

Coming back to traffic--red lights? You stop at them right? Surely you're supposed to, that's why they're there above the lane you're in. But people just inch forward through red lights when they want to turn left at a crossroad. I've seen it so many times I'm wondering if they're actually allowed to do it and the red light is just a signal to give way (or "yield") to traffic already on the road.

So many differences, and US TV is another one. Shows seem to follow this pattern:

  • When the credits begin to roll at the end of a show the screen will split, and the starting credits of the next show will show in the other half.
  • When the credits have finished (or the intro and credits have, in the case of CSI, NCIS, and Law and Order: SVU) they'll go to an ad break.
  • The show comes back and goes into numerous breaks along the way.
  • The credits will roll, screen will split and the pattern repeats.
This annoys me a great deal. It assumes I was either watching the show before the one I want to see, or that I know what will be on at that time. If I switch on the TV on the hour, chances are I will see ads when I expect to see a show starting. And as the intro has already run, I don't always know what show I'm watching.

Needless to say I haven't watched much TV. Baseball is the only thing I've watched for any duration, but that's only because I find it quite intriguing. Seems like rounders with spitting and lots of hand signals.

Tech Ed Day 3 Part 1 - Hidden Gems in ASP.Net 2.0

I have so many notes on the first session today that I thought I'd put it in its own entry. That session was Hidden Gems in ASP.NET 2.0, a packed rundown of many features that should really be documented better.

For instance, we all know that you can use Ajax technologies to post back sections of pages, so you don't have screens flickering and the like. ASP.NET 2.0 has support for callbacks right out of the box, without having to use any Ajax-specific stuff. It's a lot simpler to do with Ajax, and Ajax is more flexible, but if you only need to update a few strings or something then ASP.NET 2.0 can do it. There are some samples you can download for all of these examples.
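The server side of one of these callbacks looks roughly like this (my own sketch with a made-up price lookup; the client side is wired up with script generated by ClientScript.GetCallbackEventReference):

using System.Web.UI;

public partial class PricePage : Page, ICallbackEventHandler
{
    private string result;

    // Called on the server when the browser fires the callback.
    public void RaiseCallbackEvent(string eventArgument)
    {
        result = LookUpPrice(eventArgument);
    }

    // The string returned here is passed to the JavaScript function
    // named in GetCallbackEventReference.
    public string GetCallbackResult()
    {
        return result;
    }

    private static string LookUpPrice(string productCode)
    {
        // Hypothetical lookup, hard-coded for the sketch.
        return productCode == "WIDGET" ? "9.99" : "0.00";
    }
}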

A feature that I didn't know about was the new $ syntax for ASPX code. This allows you to specify your own prefixes through custom expression builders. I won't go into detail on this as the MSDN article above covers it better than I could here.
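The built-in prefixes give a feel for the syntax; for example (control IDs and config keys made up):

<%-- Expression builders pull values straight from Web.config at runtime --%>
<asp:SqlDataSource ID="Products" runat="server"
    ConnectionString="<%$ ConnectionStrings:MainDatabase %>"
    SelectCommand="SELECT Name FROM Products" />

<asp:Label ID="Version" runat="server" Text="<%$ AppSettings:SiteVersion %>" />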

And then there's encrypting your Web.config file to keep hackers away from sensitive information such as SQL connection strings. You don't have to worry about doing this yourself, ASP.NET can do it for you.

"Why would you bother doing this?" I hear you ask, "IIS won't serve a .config file to the browser!" Well, no it won't. But through administrative errors it's common for this to happen.

Say you or a network admin needs to make a change to the Web.config. How common is it for that person to make a backup of the existing one before changing it? And what if that person copies the file to the same directory, but just renames the file to something like Web.config.bak, or Web.config.old? Will IIS still keep it hidden for you? No, IIS will push that file down to the browser just like it was a regular file.

Hackers will try every permutation of Web.config backup names once they've found out you've got ASP.NET running on the server. You can remove the risk of these admin errors by encrypting your sensitive sections of the file.

You can use the aspnet_regiis.exe utility with the -pef switch (for Protect Encrypt File), specifying the configuration section to encrypt and the physical path of the application, or you can do it programmatically with SectionInformation.ProtectSection.
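Something like this, from what I can tell (section name and paths are examples; the provider name is one of the built-in protected configuration providers):

// From the command line (encrypts the connectionStrings section in place):
//   aspnet_regiis.exe -pef "connectionStrings" "C:\inetpub\wwwroot\MyApp"

// Or programmatically, from within the application:
using System.Configuration;
using System.Web.Configuration;

public static class ConfigProtector
{
    public static void ProtectConnectionStrings()
    {
        Configuration config = WebConfigurationManager.OpenWebConfiguration("~");
        ConfigurationSection section = config.GetSection("connectionStrings");

        if (!section.SectionInformation.IsProtected)
        {
            section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
            config.Save();
        }
    }
}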

Have you heard about Adapters in ASP.NET? No? Me neither.

These latch onto controls and intercept the markup that they are sending back to the browser. The intention of this originally was to allow developers to plug in a WML adapter to a normal ASP.NET web app to allow WML rather than HTML to be sent to the browser for mobile applications. Unfortunately this adapter never made it into the final release, but the architecture for doing it is still there and is accessible to developers to provide their own adapters.

Apparently there is a CSS Control Adapter Toolkit, which takes the form of adapters that change the generated markup to be "more CSS friendly". I'm not sure what this means as I've not spent that much time with CSS, but maybe Neil will be able to shed some light on what the CSS problems are currently and why this new adapter is needed.

ViewState getting too big to push to the browser? Want to store ViewState in session state? The current way I know to do this is to override methods in the Page for loading and saving ViewState. An easier way is to use adapters.
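Roughly, it's a page adapter along these lines (a sketch of the standard approach with a made-up class name), registered against System.Web.UI.Page in a .browser file under App_Browsers:

using System.Web.UI;
using System.Web.UI.Adapters;

public class SessionPageStateAdapter : PageAdapter
{
    // Swap the default hidden-field persister for one that keeps
    // ViewState in session state on the server.
    public override PageStatePersister GetStatePersister()
    {
        return new SessionPageStatePersister(this.Page);
    }
}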


That's all you need. Maybe Neil would like to take a look at that one too ;)

And now Virtual Path Providers. You know how ASP.NET 2.0 web applications can be built in a way that you don't need to have the ASPX files on the server--they're included in the binary. Do you know how they do that? More importantly, can I extend it to store my ASPX pages somewhere else?

With Virtual Path Providers you can. All registered Virtual Path Providers will be interrogated on every request to the application. Here you can intercept the path being requested and feed something else to the browser.

So let's say you wanted to read all the pages from a database, or maybe they're all encrypted. Derive a class from VirtualPathProvider and override the FileExists and GetFile methods. There are similar methods for directory requests too.

In the FileExists method check the path being requested and return true if you can handle that, or return Previous.FileExists if not.

Remember what I said about "all registered Virtual Path Providers"? Well just because your provider can't handle that file doesn't mean that another can't, so let someone else play if you don't want to. That's what Previous does; it's a property on the base class you're inheriting from.

The same goes for the GetFile method: do your processing to get the file (look in a database, decrypt a file, or whatever) and return a VirtualFile object that contains it.

So, now you can have dynamic pages loaded from a database but still have nice looking URLs instead of putting things in the query string.
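Here's a sketch of the shape of such a provider. Everything under a made-up ~/db/ path is served from elsewhere, and DatabaseVirtualFile is a hypothetical VirtualFile with the lookup hard-coded:

using System;
using System.IO;
using System.Text;
using System.Web;
using System.Web.Hosting;

public class DatabasePathProvider : VirtualPathProvider
{
    private static bool IsDatabasePath(string virtualPath)
    {
        return VirtualPathUtility.ToAppRelative(virtualPath)
            .StartsWith("~/db/", StringComparison.OrdinalIgnoreCase);
    }

    public override bool FileExists(string virtualPath)
    {
        // Let the next provider in the chain answer if this one can't.
        return IsDatabasePath(virtualPath) || Previous.FileExists(virtualPath);
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        return IsDatabasePath(virtualPath)
            ? new DatabaseVirtualFile(virtualPath)
            : Previous.GetFile(virtualPath);
    }
}

public class DatabaseVirtualFile : VirtualFile
{
    public DatabaseVirtualFile(string virtualPath) : base(virtualPath) { }

    public override Stream Open()
    {
        // This is where the database lookup (or decryption) would go.
        string markup = "<%@ Page Language=\"C#\" %><html><body>Loaded from the database</body></html>";
        return new MemoryStream(Encoding.UTF8.GetBytes(markup));
    }
}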

Telling the web app about the provider, though, is a little different. Because the provider needs to be loaded prior to the entire Web.config file being loaded, you have to declare the provider in a static method that the application can reflect into when the app starts.

Just provide a static method in your application called AppInitialize that includes the line HostingEnvironment.RegisterVirtualPathProvider, and your provider is registered.
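In a class under App_Code, something like this (continuing the hypothetical provider from above):

using System.Web.Hosting;

public static class AppStart
{
    // ASP.NET finds and calls this method automatically when the application starts.
    public static void AppInitialize()
    {
        HostingEnvironment.RegisterVirtualPathProvider(new DatabasePathProvider());
    }
}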

And moving swiftly onto Session State Partitioning. As web farms grow in size, the database feeding the session state becomes the bottleneck for performance in the application. ASP.NET allows you to spread this load across a number of databases relatively simply.

Create a class that implements the IPartitionResolver interface, providing an implementation for the ResolvePartition method. Use this method to decide which session state database you want to use and return the connection string for it. Then in the Web.config file, where you would normally enter the connection string parameter in the sessionState element, instead put a reference to the class you've just created in the partitionResolverType attribute.

Now, how this handles the scenario of one server starting the session with a connection to 1 database, and another server finishing it with a connection to another wasn't made that clear, but I'm sure there must be a way around that.
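For what it's worth, here's a sketch of a resolver (connection strings made up, two databases hard-coded). My guess--and it is only a guess--is that because the partition is derived from the session id itself, every server in the farm resolves the same session to the same database, which would cover that scenario:

using System.Web;

public class SessionPartitionResolver : IPartitionResolver
{
    private string[] connectionStrings;

    public void Initialize()
    {
        // In reality these would come from configuration.
        connectionStrings = new string[]
        {
            "Data Source=SessionDb1;Integrated Security=SSPI;",
            "Data Source=SessionDb2;Integrated Security=SSPI;"
        };
    }

    public string ResolvePartition(object key)
    {
        // The key is the session id string; use part of it to pick a database consistently.
        string sessionId = (string)key;
        int index = sessionId[0] % connectionStrings.Length;
        return connectionStrings[index];
    }
}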

As an example of web farm sizes that require this, it was said that MSDN ran on 4000 servers, and Hotmail on 1000. So larger than most web farms will be, but if the session database is proving to be a bottleneck in your system, then partitioning in this way is a possible solution for you.

The final part of this session (didn't I say it was packed?) touched on asynchronous pages. There is a glass ceiling on ASP.NET web apps that dictates how many requests they can serve at a given moment, and that is the maximum size of the ThreadPool. If threads are hanging around waiting for lengthy operations to complete, then that's one less thread that can respond to incoming requests.

As your application reaches this limit you'll see slowdown. If it passes this limit you'll see 500 errors, as the server will not respond to the request. You can get around this by using asynchronous pages.

Now, the session overran at this point, so an already fast paced session went into overdrive, and I'm afraid I didn't catch all the details, and there was no time for questions at the end. However, there is a session on Friday I think that covers just this section in detail, so I'll try to get to that.

Anyway, what it involves is splitting your processing into an asynchronous programming model, where you begin processing, do that processing on another thread, and then end processing by calling back. I assume that these new threads are taken out of a different pool though, as if they weren't it wouldn't give you an improvement at all.

Like I said, I'll try to get to the in depth Friday session for more information.

Tuesday, June 05, 2007

Tech Ed Day 2 - Dead Phones, Dehydration, and LINQ

Day 2 kicked off with a rapid introduction to The .NET Language Integrated Query (LINQ) Framework. This first served as an introduction to C# 3.0 and covered many of the new features which are due in the next release.

Are you fed up with having to expose your private/protected class variables behind public properties? Are your get and set methods so simple that they only return or set a private/protected variable? Wouldn't it be great if the language could help you out so you don't have to type so much code?

C# 3.0 allows you to just provide the outline of the properties, and the compiler will fill in the blanks for you.
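So something like the following is all you need to write; the compiler generates the backing field (the class is just an example of mine):

public class Customer
{
    // C# 3.0 auto-implemented property.
    public string Name { get; set; }

    // The equivalent hand-written C# 2.0 version:
    // private string name;
    // public string Name
    // {
    //     get { return name; }
    //     set { name = value; }
    // }
}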

Do you hate having to type something like:

ReallyLongClassName myClass = new ReallyLongClassName();

Shouldn't the compiler know that you'll probably want myClass to be of type ReallyLongClassName?

C# 3.0 introduces the "var" keyword that does just this. You type less, but myClass is still strongly typed.
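So the line above becomes:

// The compiler infers the type from the right-hand side;
// myClass is still statically typed as ReallyLongClassName.
var myClass = new ReallyLongClassName();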

C# 2.0 introduced anonymous methods, which allow you to put method code inline instead of having to declare a full delegate elsewhere. C# 3.0 expands on this with lambda expressions, which result in even less code having to be typed. I'm not entirely sure how these work yet, but I certainly mean to look into them in future.
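From what I've seen so far, the comparison looks something like this (a trivial example of my own):

using System.Collections.Generic;

class LambdaExample
{
    static void Main()
    {
        List<int> numbers = new List<int>(new int[] { 5, 10, 15, 20 });

        // C# 2.0 anonymous method...
        List<int> bigOld = numbers.FindAll(delegate(int n) { return n > 10; });

        // ...and the equivalent C# 3.0 lambda expression.
        List<int> bigNew = numbers.FindAll(n => n > 10);
    }
}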

So those are some of the new features coming in the C# 3.0 language, but the session was about LINQ, not C#.

LINQ is a declarative programming model, where you don't code how you want to do something; rather, you tell it what you want. The example given was iterating through a collection, performing some logic on each item, and then adding that item to another collection--something that is quite common in most systems.

Instead of setting up a loop, comparing values, instantiating another collection, adding to it, and then returning it, LINQ allows you to just say "From this collection, where this is true, select this given object and return them all".
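In code, that contrast looks something like this (my own example rather than the one from the session):

using System.Collections.Generic;
using System.Linq;

class LinqExample
{
    static void Main()
    {
        List<string> names = new List<string> { "Adam", "Bert", "Alice", "Carol" };

        // Declare what you want...
        IEnumerable<string> aNames = from name in names
                                     where name.StartsWith("A")
                                     select name;

        // ...instead of spelling out how to get it.
        List<string> aNamesTheOldWay = new List<string>();
        foreach (string name in names)
        {
            if (name.StartsWith("A"))
            {
                aNamesTheOldWay.Add(name);
            }
        }
    }
}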

There's just too much to go into detail with here, so I'll just leave this as a reminder for me to look this up more in the future. But as LINQ works with object collections, relational databases and XML it will totally change how we use collections in code. Also, as we're declaring what we want rather than how to do it, the framework can optimise this to run across the new multi-core processors. It would be very difficult to do this manually, but it's something you can get for free with LINQ.

Next up was a session on best practices for Team Based Development, which I personally found rather disappointing. It seemed to be aimed squarely at large teams with huge budgets, and wasn't particularly relevant to the smaller teams that I'm used to.

A large portion of the session concentrated on communication between the various roles within the team, like developers, testers, architects, etc. Esendex doesn't split roles in this way, and all developers work out of the same office, with no walls between us, so there's no problem with communication.

This concentration on the larger teams certainly doesn't seem to promote agility in development. How can teams react to change if they're working towards deadlines that are months in the future?

The final session for today was a discussion on code standards and styles, but again this didn't seem to tell me anything I don't already know. I knew the session wasn't going to help when some of the attendees began discussing whether or not using correct spelling should be important in code. I agree with the one guy who made the comment along the lines of "if you can understand what's going on, why does it matter?".

Unfortunately that was all I was able to attend today. This morning I noticed that my mobile phone's battery was running low, but my UK charger wouldn't work through the adapter I have. I needed to buy a US charger, so I headed off to Florida Mall, billed as the largest in Orlando, or was it Florida, I can't remember.

Well it wasn't particularly large. Mostly it's all on one floor and there didn't seem to be much variety to the shops. There's one bookshop in there, while you can't walk for 2 seconds without passing another shoe or clothes shop. So not really much use for me. I did, however, find a Radioshack that stocked Nokia chargers, so I attempted to head back to the hotel.

I had got a taxi from the hotel to the mall (after taking a shuttle bus from the Convention Center to my hotel to drop my laptop off), and just assumed I would be able to get back the same way. Well, I waited with a number of others near the taxi sign for what must have been 15-20 minutes, and no taxis turned up. So I thought I'd have better luck hailing a cab on the road, and headed that way.

Again no cabs, so I started walking in the direction of the hotel keeping an eye out for taxis, but none came. It was a pretty stupid thing to do in hindsight, but I carried on walking anyway. Then I got lost, and ended up on the wrong road so I had to head back. TV weather this morning said the temperature at around that time would have been around 91 degrees, and I was in no shade and had no water.

Like I said, stupid in hindsight. The heat was pretty unbearable, but I saw no other way of getting back at the time. Luckily I soon found a Walmart where I could buy a drink, and from there a nice lady in Customer Services called me a cab that arrived a few minutes later.

By the time I got back it was 8pm and I was in no mood to head back to the Convention Center for the later breakout sessions. So I'll take up where I left off tomorrow, and try not to do anything stupid anymore.

Monday, June 04, 2007

Tech Ed Day 1

So here I am in the land of the US, where big cars, wide roads and really thick blades of grass seem to be the order of the day.

Tech Ed started in anger this morning as Bob Muglia (Senior VP of the Server and Tools Business at Microsoft) kicked off a rather impressive Key Note that involved an introductory video with Christopher Lloyd decked out as Doc Brown from Back to the Future (complete with the DeLorean time machine). The short film mocked previous Key Notes' tendency to promote Microsoft's "visions" for the future (called MS BS by Lloyd), and made inside jokes about Microsoft Bob and the Office paper clip among other things ("It looks like you want to scream" was the question it asked, I think). Lloyd's joke referring to Bill Gates' alleged 640K of memory quote raised a few laughs among the attending folks.

After this the Key Note departed from the usual "vision" statements and instead concentrated on giving introductions to up-and-coming Microsoft products and how they can be used in the real world. Muglia touched on Windows Server 2008, Operations Manager 2007, SQL Server 2008, the 2006 R2 release of BizTalk, Visual Studio 2008 and Silverlight in the Key Note address, which was delayed most likely due to the massive queue of people waiting to register for their conference ID badges.

This delay caused most other scheduled sessions to be pushed back by about 15 minutes, which meant that I only found the food hall--contained within a huge expanse in the middle of the South building--at about 2pm (and consumed what I believe to be some kind of beef). I once attended Internet World back in the UK at Earls Court, and this room alone is about twice the size of that venue (remember what I said about things being large?).

So, onto the sessions for today:

Ron Jacobs on the Architecture Landscape extolled the virtues of Test Driven Development (TDD) and separating user interface from business logic. We develop using XP as a methodology at Esendex, so instruction on how to separate logic so it can be tested wasn't that much use to me.

What was useful was his statement that unit tests should be as simple as possible and be able to run completely contained within themselves. So if a test required a look-up into the database, then that test should still be able to run even if the database was not available. The way this is done is by using "mock" objects, and we were pointed in the direction of NMock. I haven't had a chance to look into this yet, but plan to later.
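To illustrate the general idea (with a hand-rolled stub rather than NMock, and types I've made up for the example):

// The code under test depends on an interface, not on the database directly.
public interface ICustomerStore
{
    string GetCustomerName(int id);
}

public class GreetingBuilder
{
    private readonly ICustomerStore store;

    public GreetingBuilder(ICustomerStore store)
    {
        this.store = store;
    }

    public string BuildGreeting(int customerId)
    {
        return "Hello, " + store.GetCustomerName(customerId);
    }
}

// A stub standing in for the database-backed implementation,
// so the test runs even when the database is unavailable.
public class FakeCustomerStore : ICustomerStore
{
    public string GetCustomerName(int id)
    {
        return "Test Customer " + id;
    }
}

// In a unit test:
//   GreetingBuilder builder = new GreetingBuilder(new FakeCustomerStore());
//   Assert.AreEqual("Hello, Test Customer 1", builder.BuildGreeting(1));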

What I found amusing though were some questions in the Q&A section after the talk. People asked how to convince their bosses that TDD worked, as it seemed like working backwards. It's a valid question in a way, and I think I only find it amusing because we've already successfully adopted TDD at Esendex. TDD and pair programming have definitely reduced our defect rate in produced code and have also increased our performance in quite a few places.

I don't have any concrete metrics for anyone wanting to convince management on adopting TDD, only to say that it works.

Then there was the lunch session snappily titled The .NET StockTrader Application Service-Orientation Case Study: Building High-Performance, High Reliability Systems with .NET 3.0 and Windows Communication Foundation. This was interesting on many levels, and centred around the StockTrader MSDN example. One thing I need to look into in this example when the source is released later this week is the concept of centralising your configuration files to be read from a SQL Server. The problem this solves is one which the operations team at Esendex face every day.

Reading settings from a config file is fine in stand-alone apps, but when (as Esendex does) you have multiple Windows Services running on multiple servers, keeping that information consistent is a problem. Internally we've talked about how to centralise the information a number of times, so it will be interesting to see how this is managed in the StockTrader example.

This session was my introduction to WCF, so much of it was new to me. What wasn't new though was the concept of different services communicating with each other, and that's what WCF was designed to do. WCF is the plumbing that enables services to talk to each other, and it masks it all so that the developer doesn't need to worry about exactly what transport mechanism or encoding the underlying channel is using--it is all configurable.

That session really should have been put on before my final one of today, which was Programming Microsoft Windows Communication Foundation: A Developer's Primer. In this session, fast-talking "Software Legend" (tried to find a definition, but can't) Juval Lowy gave the ultimate lowdown on WCF. This session was probably the most interesting today, and certainly the most entertaining (notwithstanding the time travelling Key Note, that is).

What was most intriguing was that although WCF is part of .Net 3.0, you can use it alongside .Net 2.0 code if you bring in the WCF pieces from .Net 3.0. Now, how you do this I'm not sure yet, but we currently develop on .Net 2.0, so if we didn't want to make the jump to 3.0 yet we could still use WCF.

WCF is described as an SDK for building SOA applications: it is the plumbing between services. Nobody cares about plumbing, Lowy says, so why spend any time creating it?

Why indeed? And with something as rich as WCF there really is no need.

WCF is a thought process shift. Syntactically WCF is simple, a normal C# developer will be able to look at the physical code and understand what it is doing. The learning curve is figuring out how to implement this in your system.

There is a clear separation between interface and implementation in WCF. It even catches any exceptions thrown so that the interface is maintained. The service exposes metadata that describes the service, and the client creates a proxy for this and uses it.

In many ways it's similar to web services, but without the restrictions that web services have. Web services can only run over HTTP, and use XML documents. WCF doesn't have these restrictions: you can have a binding that specifies TCP and binary if you want. In fact this binding is replacing the old .Net Remoting concept.
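As a sketch of what that means in practice, the same contract can be exposed over completely different plumbing just by adding endpoints (the service and addresses are made up, and in reality these would usually live in config rather than code):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IQuoteService
{
    [OperationContract]
    decimal GetQuote(string symbol);
}

public class QuoteService : IQuoteService
{
    public decimal GetQuote(string symbol)
    {
        return 42.0m;
    }
}

public class QuoteHost
{
    public static void Main()
    {
        using (ServiceHost host = new ServiceHost(typeof(QuoteService)))
        {
            // HTTP + XML, much like a classic web service...
            host.AddServiceEndpoint(typeof(IQuoteService),
                new BasicHttpBinding(), "http://localhost:8080/quotes");

            // ...and TCP + binary on the same service, with no changes to the contract.
            host.AddServiceEndpoint(typeof(IQuoteService),
                new NetTcpBinding(), "net.tcp://localhost:9000/quotes");

            host.Open();
            Console.ReadLine();
        }
    }
}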

The in-process calls that WCF allows are also quite interesting. Communicating through WCF removes any assembly-specific dependencies that an application might have. In any large system you will have assemblies referencing specific versions of other assemblies. If these dependencies ever change, the others need to be rebuilt, or the assembly configured to use a different version if the interface doesn't change.

WCF allows you to separate these completely. Now I can have one assembly that communicates through WCF to tell another assembly to do something. I don't have to worry about how it is doing it, or how WCF is sending the messages, or even where the assembly is hosted. I can build a completely scalable, totally decoupled system that is easy to maintain.

WCF is certainly something to look to for future systems.

That's a lot of stuff for just one day....

Friday, June 01, 2007

Favourable GBP -> USD Exchange Rate

UK travellers to the US can currently get nearly $2 for £1. So shopping over there is going to be quite cheap, and it makes currency calculations nice and easy :).

I think getting my hands on some US currency is the last thing I need to do before I head off to Tech Ed on Sunday.

I'm hoping to keep this blog up to date with what happens over in Orlando next week. It'll act as both a reminder of what I've done, and hopefully minimise the amount of talking I'll have to do when I get back ;)