Duke wrote: The big issue here is how the ISP works out how to classify content. To give the two obvious examples, how does an ISP differentiate between me downloading the latest episode of Pioneer One using BitTorrent, or me downloading an episode of Doctor Who?
For me to take the net neutrality position for a change (gasp): I don't think the file you actually download should be the concern of the ISP. There isn't any way (that I know of) to effectively control content being downloaded from the internet, other than whitelisting all known legal software and blacklisting the rest. This is what various enterprise-level domain policy and software controls allow (and it's a good idea: rather than trying to stop viruses X, Y and Z, just allow Adobe PDF Reader, Notepad++, Firefox and so on). I can't imagine such a service ever being economical for an ISP to run, though I can imagine ISPs and third parties offering more and more managed desktop solutions (where you remote into a desktop to get your internet service) which are more locked down.
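To illustrate the default-deny idea behind that kind of enterprise whitelisting, here's a minimal sketch in Python. The application names and "installer bytes" are placeholders invented for illustration; a real system would compare against vendor-published digests.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest used to identify a file regardless of its name."""
    return hashlib.sha256(data).hexdigest()

# A tiny whitelist keyed by application name; values are the SHA-256
# digests of installers an administrator has approved. The entries
# here are computed from placeholder bytes purely for illustration.
approved = {
    "firefox": sha256_hex(b"firefox installer bytes"),
    "notepad++": sha256_hex(b"notepad++ installer bytes"),
}

def allowed(download: bytes) -> bool:
    """Default-deny: permit only downloads whose digest is approved.
    Unknown files (including new viruses X, Y and Z) are simply blocked."""
    return sha256_hex(download) in approved.values()
```

The point of the sketch is the inversion: instead of enumerating bad things (blacklisting), everything is blocked unless it matches a known-good entry — which also shows why it's impractical at ISP scale, since someone has to maintain that list.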
Therefore the only way to control what gets downloaded is via the websites that serve the content, be that content a .jpg or a .torrent. That's why I say this is the only place controls can take place.
Similarly, how does an ISP differentiate between a website hosting child abuse images and one hosting holiday snaps?
To extend my idea a little further then: I imagine a fairly complex system of checks and balances. First of all, any ISP content service would have to carry a disclaimer that it 'makes no guarantee as to the accuracy and verifiability of the service', much like YouTube's content disclaimers, i.e. we do our best to enforce policies, but if something slips through you can report it, just don't sue us.
Anyhow, content classification feeds could come from the following sources:
* Search engines
* Other websites performing classification tasks
* Phishing feeds
* User submitted tagging via 3rd parties
* Hopefully not the government!
* ISP aggregated suggested feeds
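One hedged sketch of how feeds from those sources might be aggregated: require agreement from several independent feeds before a classification takes effect, so a single mistaken (or malicious) feed can't misclassify a site on its own. The feed names, URLs and labels below are entirely made up for illustration.

```python
from collections import Counter
from typing import Optional

# Hypothetical classification feeds: each maps a URL to a proposed label.
feeds = {
    "search-engine": {"http://example.com/a": "phishing",
                      "http://example.com/b": "adult"},
    "user-tagging":  {"http://example.com/a": "phishing"},
    "third-party":   {"http://example.com/a": "phishing",
                      "http://example.com/b": "news"},
}

def classify(url: str, min_agreement: int = 2) -> Optional[str]:
    """Accept a label only when enough independent feeds agree.
    Disagreement or a lone vote yields None (i.e. unclassified)."""
    votes = Counter(feed[url] for feed in feeds.values() if url in feed)
    if not votes:
        return None
    label, count = votes.most_common(1)[0]
    return label if count >= min_agreement else None
```

Here `/a` would be classified (three feeds agree) while `/b` would stay unclassified because the feeds disagree — a crude stand-in for the Wikipedia-like dispute process discussed next.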
To be honest, I don't know off the top of my head how maliciously misclassified data would be handled. I imagine a Wikipedia-like process: something that doesn't get it right all the time, but is far more useful than problematic.
From my understanding of the way the IWF currently works, the public (or police, or whoever) report sites to them and then a person there has to check each site, guess whether or not the content could be found illegal (and within their remit) and then add the content to the list. While this may work for a few hundred online locations, trying to do this for the entire Internet is impossible. The only way around it would be to use white-lists and some form of compulsory "tagging" of sites, but I really don't think we want to go there.
Correct, lots of parties would have to start offering comprehensive services before this idea is at all viable.
An issue here is that for a commercial model to work, people have to have a real, free and open choice. In this case there is no real choice, as that would require a competitive ISP market with clear guidelines on what content might be blocked. Nor is there a free (as in freedom) choice, as there are all sorts of social pressures inherent in having to ask your ISP for access to smuttiness or pirate content.
Agreed, we're not there yet and there is no competitive market.
This is clear from the speculative invoicing schemes where a significant part of the plan was to use material that an individual would not want to risk being publicly associated with, and I think this would apply to having to go to an ISP to ask for access. In terms of openness, I guess that will depend on how the ISP implements the service; how easy it is to find out there's blocking going on, how easy it is to change etc..
The fact that people can choose to watch adult channels on their cable TV packages hasn't stopped that area being lucrative, unless you're Jacqui Smith and have no idea about this sort of thing, of course.
A semi-mature content filtering system, I imagine, could only be effectively managed via a self-service control panel in any case. You have a point, but I don't think it would be as much of an issue as you suggest.
In my opinion, people should not need to be protected against accidentally breaking the law. Until the 19th century (iirc) you couldn't accidentally break the law as intent was required for illegal activity. While strict liability offences and infringements have their place, I feel that they have expanded far too much over the last 40 or 50 years. It shouldn't be possible for me to be arrested etc. for accidentally stumbling across illegal material online (and in the UK, afaik, it isn't); while ignorance of the law shouldn't be a defence, ignorance of the facts (i.e. a lack of intention) should be in nearly all cases.
Within the borders of the UK, fair enough, that makes sense. But if I hand over my credit card details to some dodgy Russian mp3-selling site, for example, and get defrauded, my identity stolen and all sorts, that's a serious consumer protection issue, which brings us back to phishing, a serious issue on the internet.
Also, I have probably broken the law on various occasions by watching live streaming events on the BBC without owning a TV licence.
What I'm getting at, and this is quite core to PPUK policy, is that it's very hard to use the internet and know your rights: to know what is legal, what is illegal but no one cares about, and what is plainly illegal. I (and other pirates, I hope) care about this sort of thing and would seek some clarification. For moderate and heavier internet users it's simply impractical to obey every EULA, follow all permitted format-shifting limitations, and download only content you're 100% sure is legally in the public domain. Rather than racking up liability after liability (did my ISP log that? did that software call home?), we need to be in a position where one can control one's liability, and perhaps accept it if one so chooses.
In any case, the copyright lobby isn't pushing for web-blocking to stop people accidentally visiting tPB et al., they're doing it to stop people willingly doing so - they don't want people accessing music etc. (for free or paid) (legally or otherwise) from any site not paying them money.
To be honest, that doesn't worry me too much, as I think it'd be so counterproductive. I (and others, most likely) would keep a list of P2P sites blocked in the UK, publicise the poor transparency of the process, and issue guides on how to circumvent the blocks; making the list could become a badge of pride showing that a P2P site has 'made it'.
My browser has some form of web-blocking built in; every so often I try to go to a site and get a warning saying it has been reported as an attack site or something similar. I quite like that. The difference between that and ISP-level web-blocking (as currently in action) is that the browser window has a button to ignore the warning and carry on to the site. If I trust the site (or trust my various defences etc.) I can visit it anyway. What doesn't happen is me getting a page telling me I have to get my account holder to call up the ISP, or even just a 404 page.
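The distinction between the two models can be sketched in a few lines. This is a toy illustration, not any real browser's or ISP's mechanism, and the blocklist entries are invented.

```python
# A toy blocklist mapping URLs to the reason they were listed.
blocklist = {"http://attack.example": "reported attack site"}

def browser_check(url: str, ignore_warning: bool = False) -> str:
    """Browser model: warn the user, but let them proceed anyway."""
    reason = blocklist.get(url)
    if reason and not ignore_warning:
        return f"WARNING: {reason} (button: proceed anyway)"
    return "page loads"

def isp_check(url: str) -> str:
    """ISP model: a hard block with no user override (or a bare 404)."""
    return "blocked" if url in blocklist else "page loads"
```

The same blocklist data drives both; the policy question is only who holds the override — the user at the browser, or nobody at the ISP.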
I can imagine this being needed at the ISP level however, if you're securing your internet connection against, say, your kids clicking through to such a site and ignoring the warning.
Child abuse and copyright infringement are not technical problems. They are clearly social/moral issues; i.e. our society says that they are unacceptable (well, in theory; in the latter case society is divided). In my opinion, a technical measure (blocking) is not an appropriate way of tackling either of them.
Well you know I'm against the proposed blocking implementation in any case.
Whereas I think that we should oppose [IWF scope creep], with a compromise option of accepting very high levels of transparency. Of course, no one is talking about any new powers; in fact, the existing 'powers' have no legal basis anyway (afaik the IWF has no legal authority to do what they do), and that is another problem. There's no discussion about new powers, as this is all being done by the ISPs behind closed doors. ISPs could start blocking access to huge chunks of the web tomorrow and there is nothing (legally, politically or commercially) anyone could do about it.
I disagree. If UK censorship became more widespread, there would be more mistakes. More mistakes would lead to more people publicising them and routinely circumventing the blocks, undermining the system. The IWF works only while it doesn't piss anyone off and doesn't make mistakes.
I think we can and should be opposing this, we should be trying to stop (and reverse) the scope creep of the IWF and campaign to return them to their reporting function alone (a function I fully support).
I think I agree; however, I believe there needs to be an alternative user-controlled process to complement the rest of the stuff.