
Pembroke Entrepreneurs: Newslist

First-year engineer Abhishek Shenoy (2018) was a finalist in this year's Parmee Prize. His concept, Newslist, is built on an API (application programming interface) he wrote after Google withdrew the API his news-reading app relied on, bringing the app to a halt. He wrote his own API to solve the problem, and impressed the Parmee Prize judges with this practical, problem-solving approach.

Where did the idea come from?

Back in year 12 I was learning to programme and wanted to make an app for my needs, but also to publish on the App Store. I wanted a news app that drew on multiple providers and showed me just what I wanted to see rather than content I didn't care about. So if I followed sports and it showed me football, but I didn't like football, I could stop it doing that. I built the app using Google's API and my own sorting algorithm. But when Google discontinued their API, I hit a wall: I couldn't go any further, and all my earlier work had been thrown out. I had to think how to get it working again. I thought of using other APIs, but I would've had to change the entire app. So I created my own, scraping openly available news online, saving that news to a database, and serving it from there to the app. I could simply point the app at the new endpoint and get it working again.
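The flow described here — scrape openly available news, save it to a database, then serve it to the app from an endpoint — might be sketched roughly as follows. This is an illustrative sketch only; all function and field names are assumptions, not Abhishek's actual code, and a real version would fetch live news pages rather than hard-coded records.

```python
import sqlite3

def init_db(conn):
    # A minimal store for scraped articles.
    conn.execute("""CREATE TABLE IF NOT EXISTS articles (
        id INTEGER PRIMARY KEY,
        title TEXT,
        category TEXT,
        url TEXT
    )""")

def save_article(conn, title, category, url):
    # In a real scraper these values would come from parsed news pages.
    conn.execute(
        "INSERT INTO articles (title, category, url) VALUES (?, ?, ?)",
        (title, category, url),
    )

def endpoint(conn, category):
    # The "endpoint" the app calls: return stored news for one category,
    # newest first. Sorting criteria are simplified here.
    rows = conn.execute(
        "SELECT title, url FROM articles WHERE category = ? ORDER BY id DESC",
        (category,),
    )
    return [{"title": t, "url": u} for t, u in rows]

conn = sqlite3.connect(":memory:")
init_db(conn)
save_article(conn, "Cup final report", "sport", "https://example.com/1")
save_article(conn, "Election results", "politics", "https://example.com/2")
print(endpoint(conn, "sport"))
```

Because the app only ever talks to `endpoint`, swapping the data source (from Google's API to a self-built scraper) leaves the app itself untouched — which is the point he makes about simply transferring the endpoint.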

Does it work at the moment?

It’s fully functioning. The app itself works and I use it myself, but I never got it onto the App Store. You need a good server to store the data, and new data has to be scraped continuously, which is very difficult. I’d have to run my own server or pay for a cloud server, and the more you store the more expensive it gets. It’s functioning, but I’ve never got it out in a big way.

How long did it take you to write the API?

The API took longer than making the app. It has three parts. One is the scraper, which scrapes the data and saves it to a database. Two, there are bots running through the database to extract information, summarise it, and attach keywords. The third is the endpoint: if I say ‘give me news for sport’, it will search the database and return sports news sorted by certain criteria. Scraping was the hardest because not all data is structured the same way, and you want to make sure it is. You remove inconsistencies and clean it so you only keep the important bits. Scraping through the online data and making it usable, that’s the hard bit.
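The cleaning stage he calls the hardest — normalising inconsistently structured scraped data and attaching keywords — might look something like this minimal sketch. The field names (`headline`, `body`) and the naive keyword heuristic are illustrative assumptions, not his actual pipeline:

```python
import re
from collections import Counter

# Tiny stopword list for illustration; a real pipeline would use a proper one.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "on", "for", "is"}

def clean_record(raw):
    """Normalise one scraped record into a consistent shape and
    attach simple frequency-based keywords (stages one and two)."""
    # Different sources name the title field differently; pick whichever exists.
    title = (raw.get("title") or raw.get("headline") or "").strip()
    # Collapse inconsistent whitespace in the body text.
    body = re.sub(r"\s+", " ", raw.get("body", "")).strip()
    # Naive keywords: the three most frequent non-stopword words.
    words = [w for w in re.findall(r"[a-z]+", body.lower())
             if w not in STOPWORDS]
    keywords = [w for w, _ in Counter(words).most_common(3)]
    return {"title": title, "body": body, "keywords": keywords}

raw = {"headline": " City wins cup ",
       "body": "The  city team won the cup.\nFans of the city celebrated."}
print(clean_record(raw))
```

The idea is that whatever shape the scraped data arrives in, everything downstream (the summarising bots and the endpoint) only ever sees one consistent structure.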

Will you keep pushing forward with this?

I always have multiple projects running, mostly for personal development rather than to turn them into something big. If there was real interest in it I would develop it further, but for now it’s a dormant project.

I also work on reinforcement learning, which I learnt how to do a year or two ago. I’m trying to apply that knowledge in scientific fields.

Why did you pick engineering as your undergraduate degree?

I wanted to learn computing, but I knew the market was inundated. There’s lots of knowledge out there, but there are engineers who can’t programme and computer scientists who can’t do the engineering, so I wanted to be able to do both. If I wanted to program how to build a bridge in the optimal way, for example, I’d have both the engineering knowledge and the programming skills. Being able to do that from scratch is really useful.
