Do we need a Public Web Index?
Johannes Beus (Author), 23.04.2019

Barely a week goes by in which Google's use of its market strength isn't criticised. Dirk Lewandowski, Professor at HAW Hamburg, has now proposed a public web index as an alternative to Google. Could this work?

Contents:
- Google owns the whole 'stack'
- Search in the era of Machine Learning
- Strength in Numbers
- Change is Difficult
- Summary

Google Search is currently in the news more often than it should be. In the last week alone, VG Media, a media-usage collection agency in Germany, demanded 1.24 billion Euros from Google; the Android operating system began implementing the European Commission's requirement to open up to competing search engines; and Idealo is suing for 500 million Euros. Against this background, Dirk Lewandowski, Professor for Information Research & Information Retrieval at HAW Hamburg (Hamburg University of Applied Sciences) and one of Europe's "search engine professors", has made the suggestion (also available as a PDF version) to operate the individual parts of a search engine separately:

"A proposal for building an index of the Web that separates the infrastructure part of the search engine – the index – from the services part that will form the basis for myriad search engines and other services utilising Web data on top of a public infrastructure open to everyone."
https://arxiv.org/abs/1903.03846

The core problem, he argues, is that the world view of its creators inevitably flows into the design of any ranking algorithm, so truly neutral and unbiased results are impossible. Given Google's market share of over 90% in Europe, this is a problem. His solution: separate the crawling and index layer from the frontend and ranking algorithm of a search engine. On top of a shared public infrastructure, a whole range of different search providers could then emerge and deliver much-needed diversity. Could this work? I'm rather sceptical, for the following reasons.
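To make the proposed division concrete, here is a minimal sketch of the architecture. All names in it (PublicWebIndex, PopularityFrontend, RecencyFrontend) are invented for illustration and do not come from Lewandowski's paper; the point is only the structure: one shared layer does the crawling and candidate retrieval, while independent frontends apply their own ranking on top of it.

```python
# Illustrative sketch only: names are hypothetical, not from the paper.
# One neutral, shared index serves many independently ranked frontends.
from dataclasses import dataclass


@dataclass
class Document:
    url: str
    text: str
    inbound_links: int   # used by one frontend's ranking
    crawled_at: float    # Unix timestamp, used by the other


class PublicWebIndex:
    """Infrastructure layer: crawling and candidate retrieval only.
    Deliberately applies no ranking of its own."""

    def __init__(self, documents: list[Document]):
        self._docs = documents

    def candidates(self, query: str) -> list[Document]:
        # Naive term matching stands in for real retrieval.
        terms = query.lower().split()
        return [d for d in self._docs
                if any(t in d.text.lower() for t in terms)]


class PopularityFrontend:
    """One possible service on top of the index: most-linked pages first."""

    def __init__(self, index: PublicWebIndex):
        self._index = index

    def search(self, query: str) -> list[str]:
        hits = self._index.candidates(query)
        return [d.url for d in
                sorted(hits, key=lambda d: d.inbound_links, reverse=True)]


class RecencyFrontend:
    """Another service, same index, different world view: newest first."""

    def __init__(self, index: PublicWebIndex):
        self._index = index

    def search(self, query: str) -> list[str]:
        hits = self._index.candidates(query)
        return [d.url for d in
                sorted(hits, key=lambda d: d.crawled_at, reverse=True)]


if __name__ == "__main__":
    index = PublicWebIndex([
        Document("https://a.example", "public web index proposal",
                 inbound_links=120, crawled_at=1_550_000_000),
        Document("https://b.example", "a newer take on the web index",
                 inbound_links=3, crawled_at=1_555_000_000),
    ])
    print(PopularityFrontend(index).search("web index"))  # a.example first
    print(RecencyFrontend(index).search("web index"))     # b.example first
```

Both frontends answer the same query from the same index yet return differently ordered results, which is exactly the kind of diversity the proposal hopes a public infrastructure would enable.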