A DPI, networking and joy of technology blog.

Saturday, August 23, 2008

Net neutrality and strawmen

The Precursor Blog by Scott Cleland (a blog labelled 'policy, markets and change') has some interesting commentary on Comcast & the FCC. On a factual level, I can't agree with him. On the other hand, I can't disagree either. It should be said, first and foremost, that the guy is biased - linking to an industry-issued Q&A on Net Neutrality from every page on the site is a bit of a giveaway.

For starters, I think it's not very nice to say "Net neutrality is thisandthat. As such, anyone in favour thinks this - and thus, by extension, is wrong. By the way, the FCC pooped on you. Neeners, neeners.", no matter if you wrap it in long words with many syllables and three levels(!) of bullet points. It's a pretty ugly strawman and I hope you're proud of yourself, Cleland.

Why a strawman? Simple. "Net neutrality" means different things to different people or interest groups. Is this an ideal situation? No. Does it make it easier to even discuss the issue? Definitely not. Does it allow for pretty cheap rhetoric? Yes. And in my opinion you better stay off that particular bottle if you actually want to commit anything but canned line noise.

That said, the guy does have some valid points - the main one being that there's some semblance of approval for reasonable network management coming from the general direction of the FCC.

"The system works and that there is no need for legislation or regulation;"

In a word, no. I think it's fair to say that the threat or uncertainty of eventual legislation has affected the planning or policies of major ISP's in the US. In a way, the threat of legislation is what makes the system work.

Don't get me wrong - I see business cases for extremely restricted and cheap Internet service, I see business cases for wide-area Wireless ISP's and both cases would probably need to stray pretty far from network neutrality (by any definition of the term) in order to even work out.

The difference is that 'reasonable' would be applicable in those cases. It's reasonable to prioritise interactive traffic slightly to ensure that the shared media feels somewhat responsive. Killing BitTorrent in one of the messiest ways conceivable - I'm looking at you, Sandvine - is another ballgame. The FCC seem to be in agreement. As for the rest of the world, no other Sandvine customer has tried anything quite as messy, and most other traffic management/Deep Packet Inspection kits out there offer far better facilities for managing congestion. (Disclaimer: I work for Procera Networks, a Sandvine competitor. Opinions here are my own.)


"Given that: over half of Internet traffic is P2p and ~90% of P2p traffic is illegal piracy per the US PTO; given that 40% of email is spam per the Spam Filter Review; and given that 28% of pay per clicks of the large search engines are fraudulent per Click Forensics; the majority of Internet traffic is not protected by the FCC's principles and can be legally blocked."

Great - so let's block all P2P, e-mail and web browsing. Problem solved.

Statistics are good and all, but I don't think they're very valid in this case. Look at Instant Messaging - it's also (predominantly) P2P, and in terms of transferred bytes I'd say a decent chunk of what it carries is copyrighted material. Same goes for FTP.

Light throttling is one thing - it makes sense for file transfer apps in some cases (P2P file transfer protocols are usually very good at hogging a decent sized chunk of the shared media. This can be at the expense of other applications) - but blocking is another deal altogether.


"Unable to sustain radical hardline non-discrimination absolutism in the face of facts and a real governmental process, the debate has shifted to what is reasonable -- and on that point obviously reasonable people can disagree."

The debate really hasn't 'shifted' in any direction at all, I'd say. Again, net neutrality proponents have a wide array of opinions - yes, some people really do beat the "all bits are created equal" drum, but you can't categorically say that's what network neutrality is.

Or, if you so prefer, "My unequivocal findings in the line of researching and opining on this issue is that the claim that there exists either a collectively ratified or de-facto definition of the aforementioned terminology is moot. As such, a sweeping characterization of its proponents is premature at best."

If you excuse me, I'll go wash my keyboard out with soap now.

Monday, August 18, 2008

On streaming video

This is in part a response to a paper by Andrew Odlyzko, but at the same time an interesting topic all by itself. The usual disclaimer applies - I'm employed by a DPI-peddling corporate entity, Procera Networks. The opinions are my own and this is written on my own time, but I'd be hard pressed to say that my experiences from work haven't affected them. I'm sorry about the length, but since both 'DPI' and 'net neutrality' are very battered terms, I'd rather go the extra length to explain my rationale so you can agree or disagree with that, rather than just assume we mean the same thing when we say net neutrality or shaping.

Commenting on the paper referred to above, I think that Mr Odlyzko overall got his head screwed on right with regard to some of his main points: it's perfectly possible to stream data over TCP and play from the buffer in the majority of cases (whether or not it has to be quicker than realtime is a whole other ballgame, but it isn't really that relevant - the question is whether it works and is an acceptable way of watching a movie or clip over the net, and it is). I'd also be quite inclined to agree that voice carries a much higher revenue per byte transferred than video.

There are a few things I disagree with as well. Content is king, but it's the content that's important to the user that is king, no matter if it's a YouTube clip, a torrent download, World of Warcraft or your local news page. You do have different expectations about delivery times and availability for different content, though - so it's not always about impatience. I also don't agree that the ISP's motivations are truly nefarious (even though I'm guessing I won't be seen as a credible source there if you think that they are... :-)
Finally, streaming (be it progressive streaming over TCP plus a buffer, or real-time streaming) is a bit of a challenge, but not necessarily because of realtime requirements.

Why video is hard
This one is best answered by two words: sheer volume. No matter how you transfer it, it's a whole lot of data. As faster access gets cheaper, users expect more and more of it too. I regularly click the HD links when watching trailers - that's a cool 132 MB for two minutes eight seconds, or 8.25 Mbps of access speed eaten while I download it if the network's quick enough to support realtime. Your average DVD torrent would clock in in the vicinity of 750 MB - or several GB for the HD rips out there. Given the choice between DVD and HD, HD is a very popular option indeed. For torrents, you can expect the upload utilization to be about as high as the download one, at least for users who are serious about their movies.
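
If you want to sanity-check the arithmetic behind that 8.25 Mbps figure, it's as simple as it gets - a quick sketch using only the numbers from the trailer above:

    # What access speed does a download eat if it has to arrive in real time?
    # Numbers from the HD trailer above; 1 MB treated as 10^6 bytes, close enough.
    size_mb = 132                # megabytes
    duration_s = 2 * 60 + 8      # two minutes eight seconds

    mbps = size_mb * 8.0 / duration_s
    print(round(mbps, 2))        # 8.25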

So in short, it's not really about fancy-schmancy streaming methods - it's about hauling a metric shitload of data across the network - and as the networks improve, so will user expectations and demands, thus further increasing the load. This is true both for streaming (whatever flavour of it) and torrents.

Access networks & distribution networks
There are three pipes that get interesting when we're talking Internet and bandwidth: the bandwidth between you and your Internet Service Provider (access network), the bandwidth between the provider's equipment that you connect to and their core network (distribution network) and finally, whatever capacity they have from the core to other parts of the Internet (peering).

First things first - bandwidth is expensive, make no mistake. That said, it's relatively easy to add peering capacity since you do it in a very limited number of points, and any new peering tends to take the pressure off the existing points as well. It's usually not a bottleneck, especially for the big players. For a smaller ISP, it might well be (see 'expensive' above).

As for the distribution network, you might have issues, depending on the technology already in place. Say you have a GigE fiber pair - that's 1000 Mbps up, 1000 Mbps down. Anything dangling off that distribution point shares the available bandwidth there. GigE is relatively cheap; anything higher might require an investment in new equipment (that groks 10GigE) - and population density will affect things here. Japan and Korea - both places oozing with cheap and good bandwidth - aren't exactly sparsely populated. Less density = more equipment required per user. And things tend to be priced in a manner where three boxes covering 1000 users each could well cost as much as one box covering 10000 users.

Finally, for the access network, the available bandwidth varies greatly depending on the technology in use. In some cases - Cable modems, Wireless Ethernet and 3G being notable examples - the bandwidth is also shared among a number of users, sometimes several hundred.

This is where you start having issues. DOCSIS/1.1 network? 38Mbps downstream, 9Mbps upstream, shared among a good number of users. My HD trailer above would chew 25% of the available downstream for the building or area while I was loading it. Even in a DOCSIS/3.0 network (which you won't see in that many places thus far), you'd at least notice the load as a slight peak off the baseline. And that's from one user.

In reality, you'd have to look at aggregates to get a good view of what's going over a network. I'm not watching trailers 24/7, but I'm not the sole user either. 10 HD-1080 viewers could well chew up 50% of the available downstream even on a DOCSIS 3 network. It's a fine line to walk between cheap (= many users per loop, sharing the costs) and responsive even during peak (= enough bandwidth to cover even the peak requirements without a sweat). Or, in the case of Wireless and 3G, between useful/restricted in many ways and useless/totally unrestricted.
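
To put rough numbers on the aggregate picture, here's a small sketch. The channel figures are nominal ones - 38 Mbps for DOCSIS 1.1 as above, and I'm assuming four bonded channels (roughly 160 Mbps) for the DOCSIS 3.0 case - so treat the output as ballpark, not gospel:

    # Rough sketch: what fraction of a shared downstream do N concurrent HD
    # viewers eat? Nominal channel capacities; real-world throughput is lower.
    def downstream_share(viewers, stream_mbps, channel_mbps):
        return viewers * stream_mbps / channel_mbps

    hd_stream = 8.25  # Mbps, the HD trailer from above

    print(downstream_share(1, hd_stream, 38.0))    # DOCSIS 1.1, one viewer: ~0.22
    print(downstream_share(10, hd_stream, 160.0))  # DOCSIS 3.0, ten viewers: ~0.52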

To give you an idea about aggregates, I present the following screenshot. It's a shot of a Swedish PacketLogic just after peak on a weekday and represents just over 1000 active hosts - i.e. hosts with at least one connection of some sort going, be it a DNS query, an IM convo or a torrent download. Mixed fiber, DSL and Cable users. I'm not trying to make a point, just provide some sort of numbers.
Do note, though, that there's really no such thing as an average network, and this can't be seen as very representative of 1000 machines anywhere on the net.




Introducing fairness
So if we fly by the assumption that there will be some sort of congestion somewhere in the network, at least at some times or at some point in the future, the question becomes: how do we deal with it? There isn't any really easy answer to that, but let's have a look at some of the effects and options.

For UDP (many action games, some video streaming, DNS, VoIP), congestion will manifest itself as lag, jitter or choppiness. Much, but not all, UDP tends to be used by fairly interactive apps where this will be noticeable to the user. If a DNS query is dropped, it might be a short while before the system tries to resend it, meaning a few more seconds' worth of wait before that web page starts loading. Or a missed shot in Quake.

For TCP (Web browsing, downloads, most P2P, many MMORPGs), congestion will make TCP cut the transfer speed noticeably, and it takes a while for it to ramp back up. If you're doing a 25-man raid in WoW, it's perfectly possible that this will manifest itself as sudden choppiness that'll last for a while before returning to normal. If the congestion remains, your lag goes up for the duration and healing just became a whole lot more difficult.
If you're downloading a torrent, however, you'll be connected to a good number of hosts rather than just one. Even if some of them take a hit, others won't - and frankly, you're not sitting in front of your monitor just waiting for the torrent to finish. The BitTorrent network is very good at utilising bandwidth, sometimes at the expense of other apps.
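
If you want a feel for why a single long-lived TCP flow is so sensitive to congestion, the classic Mathis et al. approximation of steady-state TCP throughput is a decent back-of-the-envelope tool. It's a rough model with made-up example numbers, not a measurement of any particular network:

    import math

    # Rough steady-state TCP throughput under random loss (Mathis et al.):
    # throughput ~ (MSS / RTT) * sqrt(3/2) / sqrt(loss_rate)
    def tcp_throughput_mbps(mss_bytes, rtt_s, loss_rate):
        bps = (mss_bytes * 8 / rtt_s) * math.sqrt(1.5) / math.sqrt(loss_rate)
        return bps / 1e6

    # 1460-byte segments, 50 ms round-trip time
    print(tcp_throughput_mbps(1460, 0.05, 0.0001))  # 0.01% loss: ~29 Mbps
    print(tcp_throughput_mbps(1460, 0.05, 0.01))    # 1% loss:    ~2.9 Mbps

A torrent spread over a few dozen flows to different peers dilutes that hit considerably, which is exactly the resilience I'm describing above.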

So how do you deal with the congestion? There are several ways.

One way to prioritize is to do it by the service in use, and from a pure customer happiness perspective it makes some sense. 49 WoW users at 10 Kbps (average) who are statistically very likely to be glued to the screen chew less bandwidth than one moderate BitTorrent user. So if some pipe is becoming full and you need to drop something, do you tell the 49 highly interactive users to back off (dead warriors and cries of agony), or do you tell your BitTorrent users to back off, inducing a very minor bump in the wire that would be pretty damn unnoticeable? This might sound like a very contorted example, but it's actually not too far from the truth - a very minor 'priority lane' for a few given protocols can make the experience for those users a lot better without degrading the transfer speeds for bulk data more than a few percent.

(I realize that the main gripe with DPI and P2P is where the P2P is shaped down to abysmal transfer rates or blocked altogether. While there are cases where this is done for pretty boneheaded reasons, it's also perfectly possible that a segment is oversubscribed and that letting P2P run rampant would mean that the P2P wouldn't be much faster and a lot of other things would be fairly slow. I'm not going to defend this other than by saying that generally speaking, your ISP does want to give you the best possible service, even if they're not exactly stellar at planning. You don't bite the hand that feeds you.)
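
To make the 'priority lane' idea concrete, here's a minimal sketch of the mechanism. It's my own illustration, not how any particular vendor's gear is built - the protocol tags are assumed to come from some classifier, and the 20% cap is a number picked out of thin air:

    from collections import deque

    # A small 'priority lane': interactive traffic is served first, but only up
    # to a capped share of the link, so bulk traffic is never starved outright.
    INTERACTIVE = {"wow", "voip", "dns"}  # assumed tags from some classifier

    class PriorityLane:
        def __init__(self, max_share=0.2):
            self.interactive = deque()
            self.bulk = deque()
            self.max_share = max_share
            self.sent_interactive = 0
            self.sent_total = 0

        def enqueue(self, pkt):
            q = self.interactive if pkt["proto"] in INTERACTIVE else self.bulk
            q.append(pkt)

        def dequeue(self):
            if not self.interactive and not self.bulk:
                return None
            share = self.sent_interactive / self.sent_total if self.sent_total else 0.0
            take_lane = bool(self.interactive) and (share < self.max_share or not self.bulk)
            q = self.interactive if take_lane else self.bulk
            pkt = q.popleft()
            self.sent_total += 1
            if take_lane:
                self.sent_interactive += 1
            return pkt

The cap is the whole point: the interactive lane gets to jump the queue, but it can never crowd out the bulk transfers - which is why the torrent user barely notices.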

If you don't want to go down that particular alley, you can control the available bandwidth for a user based on total bandwidth transferred. Transfer a lot during a 24-hour window and your transfer speeds get knocked down to a lower tier. Decent equipment can implement this as a rolling window - so if your peak bandwidth usage is high but your average is low, you'd likely not be affected, and if you are, it's a pretty temporary state. This also means that heavy P2P'ers would be pretty much living in the lower tier.
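
The bookkeeping for that rolling window is equally unglamorous. A minimal sketch - the 24-hour window is from the text above, while the 5 GB threshold and the tier names are made up:

    import time
    from collections import deque

    WINDOW_S = 24 * 3600       # rolling 24 hour window
    HEAVY_BYTES = 5 * 10**9    # assumed threshold: 5 GB per window

    class SubscriberUsage:
        def __init__(self):
            self.samples = deque()   # (timestamp, bytes) pairs
            self.total = 0

        def add(self, nbytes, now=None):
            if now is None:
                now = time.time()
            self.samples.append((now, nbytes))
            self.total += nbytes
            # drop samples that have rolled out of the window
            while self.samples and self.samples[0][0] < now - WINDOW_S:
                _, old = self.samples.popleft()
                self.total -= old

        def tier(self):
            return "throttled" if self.total > HEAVY_BYTES else "normal"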

You could also go protocol agnostic and just plain relegate the heavy users to a lower priority tier than the less heavy users: less bandwidth available for the heavy users at peak hours, since the more casual users would be doing their stuff (depending on the gear used to shape, it'd show up as plain slower downloads or as pretty annoying packet loss).

There's also the option of just plain selling bandwidth you can use 24/7 unhampered. This might mean that you get 512/128 - or way less, depending on what technology you're using to access the net (DSL, Cable, Wireless, etc) and you're likely to pay more or much more than you do today. Not very encouraging and forget any service that depends on high available bandwidth.

Or, you can just charge per megabyte transferred. I'm not too keen on this one myself, but it'd be a possible service model. Sadly, it'll likely mean that users would be reluctant to use bandwidth-intense applications - even casual YouTube'ing chews quite a bit of bandwidth - and people can get quite nasty bills out of the blue if they're not aware of how much their apps chew or if they get some talkative malware. Forget heavy P2P. Cringe at software updates.
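
To see how quickly per-megabyte billing can turn ugly, a back-of-the-envelope with completely made-up numbers - pick your own price and usage pattern:

    # Entirely made-up numbers, just to show how fast per-MB billing adds up.
    price_per_mb = 0.01             # assumed: one cent per megabyte
    casual_video_mb_per_day = 500   # roughly half an hour of decent-quality video

    monthly_bill = price_per_mb * casual_video_mb_per_day * 30
    print(monthly_bill)             # 150.0 - a nasty surprise for 'casual' use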

There are other ways as well. But the bottom line is that as an ISP, you probably will run into congestion somewhere in the network at some point and you'll have to deal with it. In some economies, you have to live with the congestion and just handle it as best you can (there are places in the world where a 4Mbps line costs an arm, a leg and the better part of the torso) - this is also true for some access technologies. There's only so much bandwidth available in a wireless cell.


DPI uses
Quoting Mr Odlyzko,

"On the other hand, you do need DPI in either of two situations:
– You want to prevent faster-than-real-time progressive downloads that provide low-cost
alternative to your expensive service.
– You want to control low-bandwidth lucrative services that do not need the special video
streaming features. "


Feel free to call me biased, but neither of these are the standard use cases I see for DPI (and by 'DPI' I refer to DPI and shaping as implemented in traffic shaping devices), and they do strike me as at least a bit tinfoil hat'ish.

For ISP's, I'd say that some of the common uses are:

  • Fairness between users (ensuring that Joe doesn't overly affect his neighbour Bob if they share the same medium)

  • Prioritising VoIP. Yes, really. Sure, if the ISP is a telco and is selling POTS as well, they'd be keeping a pretty damn close eye on how much business they're losing to VoIP. Still, I never ran across a single case where SIP got hampered in any way by a normal commercial ISP. There are countries where VoIP is outright illegal (and this is weird), but that's really a whole other ballgame.

  • Prioritising some other interactive services. DNS, WoW and IM are a few of the ones I see.

  • Shaping down P2P in the access network so it doesn't chew up all of the available bandwidth.

  • Shaping down P2P at peering points and peak hours to keep the bandwidth costs low. Mostly smaller to midsize ISP's that pay the larger fish for bandwidth.

  • Prioritising HTTP based media streams, RTMP and other streaming (Flash video over HTTP - i.e YouTube, BBC iPlayer, various windows media based streaming, Apple iTunes, etc). It makes sense to do so since faster buffering = happier user, if you've got the bandwidth to spare or just prioritise it over BitTorrent transfers.

  • Shaping down HTTP based media streams (again, Flash video over HTTP), since they do buffer and aren't as sensitive to congestion as other streaming can be - useful if there's very limited bandwidth available and you don't want to cannibalise 'more' interactive services.

  • Above all, statistics and insight. Network operators are as curious as the next guy and it makes a lot of business sense to know what your users are using. (A stripped-down sketch of what that kind of accounting boils down to follows right after this list.)
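
As promised in the last bullet, here's what that kind of accounting boils down to at its absolute simplest. The classify() step is a stand-in for whatever the DPI engine actually does - that part is the product, the counting isn't:

    from collections import defaultdict

    # Count bytes per classified protocol and per subscriber - the whole
    # 'statistics and insight' use case in miniature.
    bytes_per_proto = defaultdict(int)
    bytes_per_subscriber = defaultdict(int)

    def classify(packet):
        """Stand-in for a real traffic classifier."""
        return packet.get("proto", "unknown")

    def account(packet):
        proto = classify(packet)
        bytes_per_proto[proto] += packet["length"]
        bytes_per_subscriber[packet["subscriber"]] += packet["length"]

    # e.g. account({"subscriber": "10.0.0.42", "proto": "bittorrent", "length": 1500})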



Bottom line though, I can't see any good scenario where you'd want to hamper video over HTTP because it's video. There might be cases where you might want to punish HTTP downloads in general - and some equipment won't see any difference between a ZIP and an FLV, both are just a large HTTP transfer - but these really wouldn't be restricted to video, and we're talking very limited bandwidth networks.

Wednesday, August 13, 2008

CAcert.org - you got what you paid for

I never really liked the concept of a self-signed SSL certificate. To me, the whole charm of SSL is that you can verify that the host you're talking to is the host you think you're talking to - require the user to disregard the warnings about this and the purpose is defeated. I'm all for the more rabid warnings about this in Firefox 3 that some people like and others dislike.

CAcert.org is basically a self-signed Certificate Authority with aspirations of having its root cert included in as much mainstream software as possible. They're included in some Linux distributions, a couple of BSDs - notably OpenBSD - and aspire to be included in Firefox/Mozilla (which would probably make the certs signed by them at least moderately useful for mainstream usage).

Let me say this first - I rather like the idea of a free CA. Philosophically it's a nice idea. The most important thing though, is not that the CA is free - it's that the CA can be trusted.

I've come to the conclusion CAcert.org can not be trusted. Whether or not you share that sentiment is entirely up to you - I'll outline my own reasons below.

Summarizing them a bit: the source code is available (definite plus), they have a formalized structure of sorts, they are fairly transparent and they audit themselves. These are all things that I - as a prospective user - would expect.

The thing that gets to me, though, is that they manage to miss out on some of the very fundamental things. If you run a CA, you probably don't want to run it on PHP. If you run it on PHP, you probably want to run a recent version and follow best practices. If you don't follow best practices, you really, really don't want to have register_globals enabled.

And if you insist on blowing all of these fairly sane precautions, bad things can happen:
[Screenshot: basically 'which email address would you like to use to verify that you control yahoo.jp so you can get a cert for it'.]
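
To spell out the bug class (this is emphatically not CAcert's code, just an illustration of the principle): domain control validation only means something if the server itself derives the short list of addresses it's willing to send a challenge to. The moment anything that arrived with the request can influence that list - and register_globals hands out exactly that kind of trust for free - 'proof of control' means nothing:

    # Illustration only - not CAcert's code. The address list that can 'prove'
    # control of a domain has to be built server-side, never taken on faith.
    ADMIN_LOCALPARTS = ["hostmaster", "postmaster", "webmaster", "admin", "root"]

    def verification_addresses(domain):
        return ["%s@%s" % (lp, domain) for lp in ADMIN_LOCALPARTS]

    def send_challenge_mail(address):
        print("would mail a verification token to", address)  # stub mailer

    def start_domain_challenge(domain, requested_address):
        # The failure mode: trusting requested_address just because it showed
        # up in the request (register_globals makes that trivially easy).
        if requested_address not in verification_addresses(domain):
            raise ValueError("refusing to verify %s via %s" % (domain, requested_address))
        send_challenge_mail(requested_address)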

The code base is, in a word, horrid. Bad or no documentation, cases of input being sent to the database without quoting, and some older code - containing what look like vulnerabilities - that got sanity checks tacked in front of it without the underlying issue ever being fixed.

Case in point: it didn't take a whole lot of random peeking for me to bypass one pretty damn important security feature (in some instances at least - it's not a generic exploit), and I'm not a vulnerability researcher. If it takes a bit of HTTP knowhow and a few Google queries to get signed certificates for domains I'm in no way associated with, I'm guessing someone who's actually into PHP security could do a whole lot more.

It's good that they have processes, but if it comes to the point where we see schoolbook examples of bad programming, the code really shouldn't be in production in the first place.

And this brings us to what is important in a CA - trust. To trust a CA, you should trust the organization maintaining it. In this case, the board hasn't managed to get a good reality check of the state of the 'product' (or if they have, they're ignoring it / keeping mum), the programmers just plain haven't done a good job and the auditor(s) really, really shouldn't have missed something this obvious.

I don't want to see the root cert of this CA in Mozilla. I don't want to see it in Debian. How it managed to slip by OpenBSD's radar is beyond me. I certainly do hope, though, that there will be a decent free and/or open CA some day - hell, it might even be CAcert - but saying that they're good because they're free and they should be included in as many places as possible because of this is just not sound thinking.

At least to me, running the operation like they do means that they probably shouldn't be trusted without a complete overhaul in code and management. It's that bad.

Finally: I'm perfectly aware that there might be commercial CA's out there with pretty bad bugs of their own - but at least WebTrust ensures that they haven't committed the classic schoolbook examples of bad security and practice. This peace of mind is well worth $15 a year for a cert.

YMMV, of course.

[Edit: The exploit mentioned above was fixed in a very timely manner once pointed out to the CAcert chaps.]

Thursday, August 7, 2008

On Dr Reed's DPI hearing

Susan Crawford blogged about DPI a few weeks ago*, highlighting something that previously flew below my radar - a testimony by David Reed (MIT) at a US congressional subcommittee. 

While I definitely concede that Dr Reed has a lot more experience in networking than I do, there are a few points I just have to respectfully disagree with. Keep in mind that it's the DPI industry that pays my bills, so I can't really call myself impartial here. FYI.

One thing before we start - this is related to DPI in use by Internet Service Providers, not necessarily DPI as implemented by higher education or private enterprises. I believe it's the right of these entities to govern their own traffic, for several reasons (I'll be touching on some of that in a later post).

"First, that such technologies are not at all necessary to operating the Internet or to 
profitable operation of an Internet operator..."
"Scaling: To make a faster Internet, all one need do is process the envelopes faster."

This is one of the main tenets. While I do agree with his idea that the success of the net is in part fuelled by the simplicity and standardization of IP (run any app without compatibility issues, forward compatibility, etc), I regularly see networks where it'd be next to impossible to run the network properly without some sort of limitations imposed on the end users. In these cases 'processing the envelopes faster' would require technology not yet in existence - use cases would be rural broadband wireless ISPs in Australia, Wyoming or elsewhere. Another one is mobile data networks. In these cases we have a hard and technical limit. Sure, there'll be technical advances that'll improve the situation - but these ISP's are sometimes the only option for users in rural areas, and it'll be hard to side-step the fact that we're talking about truly shared media here. Service now or maybe theoretically better/fairer service in the future?

"...the Internet Architecture, as defined by the IETF and other 
bodies who oversee the Internet's evolution, neither requires nor allows Internet 
Datagrams to be modified or created by AS's in this manner.
[...]
Thus Deep Packet Inspection goes against the separation of concerns that has been a 
hallmark and generator of the Internet's success."

Here's one I kind of agree with, in a way. I think it's quite nasty to go modifying the payload transparently and unbeknownst to the user (ad insertion à la Phorm), but this is not an inherent capability or feature of every kit doing DPI out there, and what he's saying is 'DPI does this, hence DPI is evil'. Two cases that are, in my opinion, perfectly valid on technical merit and don't make any value assumptions whatsoever about the data while it's in flight:
  • WiFi where the user isn't allowed to use the Internet before an agreement is accepted or a fee is paid. In this scenario you'd want to block anything that isn't HTTP and direct the HTTP requests to a special login page until the conditions are met. (Yes, you can do this without DPI as well, if you start making assumptions based on ports. Which is daft.) A rough sketch of the redirect logic follows below.
  • Statistics collection based on protocol and/or application. If you know what's on your network, it's easier to plan for upgrades. It'd also be possible to offer services better tailored to a given set of users. Merely knowing the throughput on a per-user basis does not give any hints about how jitter or latency sensitive the apps are. Sure, the stats from a DPI stat unit give a hint at best, but a hint is way more than blind guessing. (Dr Reed actually touches on this later, saying "Finally, Deep Packet Inspection technologies are used for monitoring the performance and health of Internet operations. [...] Such tools can be quite helpful in finding faults within the network and predicting areas of growth that support AS's".)
This is without even touching the cases where it'd - in my opinion - make sense to actually shape data, be it to lower costs or to make operating the network feasible in the first place.
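
Here's the rough sketch of the redirect logic I promised in the first bullet. The field names, the LOGIN_PAGE address and the authenticated() lookup are placeholders, not any real product's interface:

    # Captive-portal style handling: until the user has accepted the terms,
    # let DNS through, redirect HTTP to the login page and drop the rest.
    HTTP_PORTS = {80}
    LOGIN_PAGE = "http://portal.example.net/login"   # hypothetical portal URL

    def handle_packet(pkt, authenticated):
        if authenticated(pkt["src_ip"]):
            return ("forward", None)
        if pkt["protocol"] == "udp" and pkt["dst_port"] == 53:
            return ("forward", None)             # DNS, so the redirect can resolve
        if pkt["protocol"] == "tcp" and pkt["dst_port"] in HTTP_PORTS:
            return ("redirect", LOGIN_PAGE)      # rewrite the request to the portal
        return ("drop", None)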

"Deep Packet Inspection systems work by deliberately interfering with end-to-end communications, but by definition attempt to deceive the endpoint systems about what the original Internet Datagrams contain."

In one word: No. I agree that DPI is a very much battered term (in fact, see one of my previous posts regarding that), but DPI is not inherently equal to modifying data in-flight. Traffic shaping gear, for instance, can (usually) operate in a manner where a given protocol is given a smaller pipe to operate in - i.e. less bandwidth available per user. It's quite possible to debate whether that's kosher or not, but it does not trigger the risks that Dr Reed highlights.
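
For the record, 'a smaller pipe for a given protocol' is nothing more exotic than a rate limiter applied to whatever the classifier has tagged - a plain token bucket will do for illustration. The numbers are made up, and note that nothing in the packets gets modified or spoofed:

    import time

    class TokenBucket:
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.time()

        def allow(self, nbytes):
            now = time.time()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            return False

    # ~512 kbit/s allowance for traffic tagged as BitTorrent (made-up figure)
    bittorrent_pipe = TokenBucket(rate_bytes_per_s=64000, burst_bytes=256000)

    def shape(pkt):
        if pkt["proto"] == "bittorrent" and not bittorrent_pipe.allow(pkt["length"]):
            return "queue_or_drop"
        return "forward"

Packets over the allowance simply get queued or dropped; nothing is forged or rewritten, which is the distinction I'm drawing against Dr Reed's description.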

Summarizing a bit: I agree that it's close to moronic to randomly go modifying the contents of packets in-flight, but what Dr Reed is describing is a mere subset of DPI gear. Saying 'DPI is evil' is overly broad.


* Bear with me - since shortpacket.org is brand new, there will be comments on stuff that's not hot off the presses as long as it's interesting.

ZDnet on DPI

Michael Kassner at ZDnet wrote a pretty good introductory text about what DPI will mean to end users, from an end user perspective. Most of it is pretty much spot on and it's a good read. Some notes about the parts that aren't:

"Enforcement of service-level agreements: ISPs can use DPI to ensure that their acceptable-use policy is enforced. For example, DPI can locate illegal content or abnormal bandwidth usage."

"DRM enforcement: DPI has the ability to filter traffic to remove copyrighted material. There's immense pressure from the music and film industries to make ISPs responsible for curtailing illegal distribution of copyrighted material."

DPI gear is, by itself, actually pretty bad at discovering illegal content. For decent gear it's somewhat easy to spot encrypted P2P, and it's easy to spot Freenet, I2P, Tor, etc. - but it's not very easy to ascertain that the content being transferred is illegal. For the P2P traffic, you'd gain more information by asking the P2P network itself (it's still a mammoth task though). Freenet and I2P are designed to make this pretty hard, since - unlike normal file-sharing P2P networks - there's no authoritative list of which IP is offering what data.

Granted, it's perfectly possible to say that P2P should be blocked, period, and this is something stream identification + traffic filtering can do for you. Quite a few places do this, especially higher education sites.

Spotting 'abnormal bandwidth usage' doesn't necessarily require DPI either. You could spot this on a per-subscriber basis without any DPI gear whatsoever. Traffic shaping equipment usually makes this a trivial exercise and provides stream identification at the same time, though.
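
To underline that point, flagging abnormal usage needs nothing but per-subscriber byte counters - NetFlow, SNMP, whatever you already have. A trivial sketch with a made-up threshold:

    from collections import defaultdict

    DAILY_LIMIT_BYTES = 20 * 10**9   # assumed: 20 GB/day counts as 'abnormal' here

    daily_bytes = defaultdict(int)

    def record_flow(subscriber_ip, nbytes):
        daily_bytes[subscriber_ip] += nbytes

    def abnormal_users():
        return [ip for ip, total in daily_bytes.items() if total > DAILY_LIMIT_BYTES]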

Wednesday, August 6, 2008

DPI - what's in a name?

One of my favourite pet peeves regarding the public perception of this industry is the way the term Deep Packet Inspection is used to describe any sort of device that processes layer seven, no matter what it does. Consider the following three implementations:
  • A WAN optimization appliance with protocol awareness / on-the-fly data compression.
  • A protocol aware bandwidth limiting device, a.k.a traffic shaper.
  • Ad insertion in random web pages. Yes, this is plain evil.
I've seen all of those referred to as DPI, despite them having almost nothing in common when it comes to what task they actually perform.

Let's paraphrase this using another type of traffic - automotive. You wouldn't refer to the concepts of toll roads, bus lanes and congestion charges as all being Road Traffic Management, right? Yet this is pretty much what's happening with DPI.

The industry itself isn't helping much either, as new prefixes and suffixes are invented by some vendors just to further add to the confusion. We've got Qosmos and their ixDPI, more than one vendor is pushing or has been pushing DFI, etc. In a way, I can understand this - since world + dog does DPI, I'm guessing they're trying to differentiate 'our DPI' from 'their DPI'. Thankfully, I'm not a vendor and not trying to sell anything. 

Bottom line: Be careful about thinking of DPI gear as one single class of gear. The technical implementation depends on what you need to do with the traffic, the positioning in the network depends on what you need to do with the traffic and one device really isn't anything like the next. Even if you can do horribly nasty stuff with some devices, the concept all by itself isn't necessarily evil.

Someone else who has blogged about this and more is Travis Dawson (who works for Sprint but opines all on his own). You can find his take on it from a service provider perspective here.

init()

Another year, another blog.

I'm a networking professional (sounds fancy, doesn't it?) in the field of Deep Packet Inspection - DPI for short. Reading the news, it seems there are enough misconceptions regarding DPI for it to be interesting to blog about it from the perspective of an industry insider.

Whether that means 'insight', 'rambling' or 'missive from the enemy' is entirely up to you, of course. For me, this is simply a place to comment. Slashdot has some interesting discussions every once in a while on the DPI topic, but the signal-to-noise ratio is absolutely dreadful and way too many posters have no idea what they're talking about. Harsh but true.

Any opinions stated here are my opinions, not those of my employer - a corporate entity that shall remain unnamed. Keep in mind that they are opinions - my point of view, not the objective truth. I'll try to back them up by facts as often as possible, but be advised that it will not always be possible to do so for one reason or the other.

If you'd rather read somewhat well-written, corporate-entity-approved texts, dpacket.org is probably the place you're looking for (just keep in mind that most people posting there are stakeholders in the industry - i.e. only slightly more impartial than Bill O'Reilly). I'll make sure to highlight and comment on any relevant news from that blog.

Addition & retrospect: Shortly past the inception of this blog I got asked to do some more writing on the topic of DPI on another blog. As that would be in a more official capacity than what I do here - i.e signed with the company name - it didn't really make sense to keep my place of employment a pseudo-secret anymore. So in the interest of disclosure, I work for Procera Networks. If I tell you to buy a PacketLogic on this blog, please take it with a grain of salt.