I haven’t been able to work much for the past week due to illness and other responsibilities, but I managed to put in a nice 6hr chunk of time yesterday and today. As a result, I’ve hooked up live reddit data from r/news, established basic story retrieval with the Node API, modified the front-end to accept the data, and created a rudimentary search setup with Elasticsearch. A nice uninterrupted 10pm–4am stretch can sometimes be so much better than several short bursts in the daytime!

I estimate this chunk completed maybe 10% of the front-end and 20% of the web server. I’ll do the proper ETA calculation later. Here are some shots with live data:

melona_live_1

The new story detail page with populated data. I need to clean up the keyword extraction and images.

melona_live_3

The front page. The reddit data mining runs periodically so there can be live updates to the site. (server push coming soon)
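For the curious, the polling side is not much more than a timer hitting reddit’s public JSON listing. A rough sketch of the shape of it — node-fetch, the 10-minute interval, and the saveStories stub are all stand-ins here, not the actual crawler code:

```js
// Rough sketch of the periodic r/news poll. The interval, field names,
// and saveStories() are placeholders for illustration.
const fetch = require('node-fetch'); // assuming node-fetch for HTTP

const FEED_URL = 'https://www.reddit.com/r/news/hot.json?limit=25';

function pollReddit() {
  fetch(FEED_URL, { headers: { 'User-Agent': 'newsmelon-crawler' } })
    .then(res => res.json())
    .then(json => {
      // Each child is one reddit post pointing at a news article
      const stories = json.data.children.map(child => ({
        title: child.data.title,
        url: child.data.url,
        score: child.data.score,
        postedAt: new Date(child.data.created_utc * 1000)
      }));
      return saveStories(stories); // hypothetical persistence step (e.g. a MongoDB upsert)
    })
    .catch(err => console.error('reddit poll failed:', err));
}

// hypothetical stub so the sketch runs standalone
function saveStories(stories) {
  stories.forEach(s => console.log(`[${s.score}] ${s.title}`));
  return Promise.resolve(stories);
}

setInterval(pollReddit, 10 * 60 * 1000); // every 10 minutes (made-up interval)
pollReddit();
```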

 

melona_live_4

Elasticsearch powering the search page was remarkably easier to set up than my previous approach of building search from scratch with Lucene.
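The search route itself is basically one query against the index. Something like this sketch, using the official elasticsearch JS client — the stories index name and field names are placeholders, not the real mapping:

```js
// Roughly how the search endpoint queries Elasticsearch. Index and field
// names ("stories", "title", "summary") are assumptions for illustration.
const elasticsearch = require('elasticsearch');

const client = new elasticsearch.Client({ host: 'localhost:9200' });

function searchStories(query) {
  return client.search({
    index: 'stories',
    body: {
      query: {
        multi_match: {
          query: query,
          fields: ['title^2', 'summary'] // weight title matches higher
        }
      },
      size: 20
    }
  }).then(resp => resp.hits.hits.map(hit => hit._source));
}

// usage
searchStories('election').then(results => console.log(results));
```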

Backend Prep

I’ve started building the Node back-end, and already I’m beginning to see why others warn against “callback hell.” I’ve picked up bluebird for Promises, which should help rein in the nesting. Mocha and Chai are looking good as options for testing the REST API once I have the requirements stabilized, and may give me more room for test-driven development.
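To illustrate why I’m reaching for bluebird, here’s the general shape of the change, with made-up callback-style helpers standing in for my actual data layer:

```js
const Promise = require('bluebird');

// Hypothetical callback-style helpers, stand-ins for the real data layer
function fetchStory(id, cb) { cb(null, { id, url: 'http://example.com/article' }); }
function extractKeywords(story, cb) { cb(null, ['placeholder', 'keywords']); }

// Wrap them once so they return promises instead of taking callbacks
const fetchStoryAsync = Promise.promisify(fetchStory);
const extractKeywordsAsync = Promise.promisify(extractKeywords);

// The nested-callback version flattens into a single readable chain
fetchStoryAsync('abc123')
  .then(story => extractKeywordsAsync(story))
  .then(keywords => console.log(keywords))
  .catch(err => console.error(err));
```

Each nested callback level becomes one .then, and a single .catch covers errors for the whole chain.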

I’ve currently worked 19hrs on this project, with an estimated 21% completion of the front-end. That works out to roughly a 90hr front-end build (up from the 56hr estimate in the previous post), with about 71hrs to go. I’ve been working on other projects, so I haven’t been able to put in enough time to hit my 2-week stretch deadline.

However, I’ve set up the Express and Flask APIs and am currently using them to retrieve placeholder data for the story pages. Here’s what the story page looks like so far:

Capture-5-3

I’m using Newspaper to generate all of the information so far, with its built-in NLP capabilities extracting keywords and summarizing the text. There was never much of a design to begin with, so I’m still thinking about how to display the statistics and sentiment-analysis information when I get to them.
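Wiring-wise, the Express story route mostly just hands the article URL to the Flask service and reshapes the response. Here’s a sketch of that hand-off; the /nlp/extract path, the port, and the response fields are assumptions for illustration rather than the real endpoints:

```js
// Sketch of the Express story route calling the Flask/Newspaper service.
// Path, port, and response fields below are placeholders.
const express = require('express');
const fetch = require('node-fetch'); // assuming node-fetch for the internal call

const app = express();

app.get('/api/story', (req, res) => {
  const articleUrl = req.query.url;
  fetch(`http://localhost:5000/nlp/extract?url=${encodeURIComponent(articleUrl)}`)
    .then(flaskRes => flaskRes.json())
    .then(article => res.json({
      title: article.title,
      keywords: article.keywords,  // produced by Newspaper's nlp() on the Flask side
      summary: article.summary,
      image: article.top_image
    }))
    .catch(err => res.status(500).json({ error: err.message }));
});

app.listen(3000);
```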

The summary also won’t be a huge block of text in the final version, but a collection of helpful snippets from multiple news sources. On the right sidebar I’ve added an area for “Related,” which may imply some content recommendation in the future. I have no click data, so recommendation will most likely be NLP-similarity based (or maybe index news sites’ own recommendations?).

Basic Structure and Styling

I’m trying out a different approach with this project. Most of the time I build the back-end first, with functional APIs and real working data, before beginning the UI. As a result, changes in the UI requirements sometimes force back-end modifications for overlooked features. This time, I want to build a semi-static UI with dummy data first, then see what data requirements can minimally fulfill the UI.

I’ve been a little busy these past few days with other projects, so I’ve only been able to log 4.5 hours so far. I’ve marked an approximate 8% completion for the minimal front-end code, which works out to an estimated 56.25hr total for the front-end (4.5hr ÷ 0.08). This estimate will become more accurate over time, but seeing that this placeholder UI is essentially the wireframe and requirements list for the project, these numbers might be useless for now!

Here’s how it looks so far:

Capture-4-28

I’m using Unsplash for the placeholder images, which is nice since they’re random (refreshing!). It looks like Product Hunt because, well, my main design inspiration for now is Product Hunt. I’ll worry about branding and individuality later, after I get to the fun NLP parts of the project.

I’ve also set up a basic Express server, which will serve placeholder data as its outputs. That way I can fill the UI placeholders through the REST API and have a halfway-functional front-end by the end of this initial phase.
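As a sketch of what that placeholder API looks like right now — the routes and dummy fields are just whatever the current UI happens to need, nothing final:

```js
// Minimal placeholder API. Route shapes and dummy fields are illustrative only.
const express = require('express');
const app = express();

// Hard-coded dummy story, standing in for real extracted data later on
const placeholderStory = {
  id: 1,
  title: 'Placeholder headline',
  summary: 'A few sentences of dummy summary text.',
  keywords: ['placeholder', 'dummy'],
  image: 'https://source.unsplash.com/random/600x400'
};

app.get('/api/stories', (req, res) => {
  // The front page wants a list; repeat the dummy story for now
  res.json(Array(10).fill(placeholderStory));
});

app.get('/api/stories/:id', (req, res) => {
  res.json(placeholderStory);
});

app.listen(3000, () => console.log('placeholder API on :3000'));
```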

I’m trying out a documentation schedule for a new project I’m starting called NewsMelon. It will be a combination of information retrieval and text summarization: NewsMelon will find multiple news articles about a story and provide a summary of that information. The goal is to build an MVP within the next two weeks, using a Node-based web server with Flask for the NLP and React/Redux for the front-end. I haven’t built anything major with Node.js yet, so I’m hoping I’ll get acquainted with some useful packages and Express.js.

The tentative tech stack is listed below:

Back-End:

  • Node.js running Express.js as the static file server / CRUD REST API
  • MongoDB as the main DB (to experiment with a JSON-based data storage system)
  • Python Flask as an endpoint to access NLP functions
  • NLP: NLTK / Goose Extractor / Newspaper (to extract article data from news article URLs)
  • LevelDB? via LevelUP for full-text search on the Node web server

Front-End:

  • React: UI
  • Material-UI: basic Material Design style guide with React components (for dev speed)
  • Redux: front-end state management (may not use if the application is simple enough)
  • SASS: for stylin’

Plus webpack for building and npm for package management.

With an ambitious goal of 2 weeks (maybe 6 weeks to be realistic), I’ll try to post major updates every few days on insights I gain and issues I come across.


Link to Game

It’s been a long time, and many things have changed. Among them, I made my first Android game. I’ve been doing a lot of Android development for the mobile app supporting my hardware product, Moxie, and this was a side project I took up on a whim. Yes, it was inspired by the classic game by XGen Studios, and I wanted to bring a richer experience to the mobile market. Fishy holds a special place in my younger years, when Flash gaming was bigger and less plagued by in-app purchases.

The gameplay follows the natural law: eat what’s smaller than you, avoid what can eat you. Since this was my first game, I went a long way in figuring out how game graphics work and how to create the illusion of a natural environment. I learned a few things along the way, and I might make a few tutorials in the future to save anyone else with similar goals a few hours of time. But even though it felt like a long time, I finished the whole project in under a week (that’s not to say I’m not going to add a few improvements).

The game engine is rudimentary, but at least it was implemented from the ground up. A rendering thread updates and draws to a canvas on the screen in increments of around 30 ms while the UI thread runs in the background, with sporadic calls to Java’s built-in audio players and whatnot. The game-world physics are interpolated in these 30 ms ticks, and each object’s velocity and position are calculated. User input is also handled during these ticks, creating the illusion of continuous movement even though the time between frames might be a bit choppy. An object pool holds and spawns instances of each enemy type through an enemy factory, according to the desired difficulty and settings, and these objects are recycled in place to avoid constant allocation and memory leaks.

Add a few level mechanics and fish-growing behavior and you have the basic workings of my first game. I hope people like it. A full report on the game to come soon.

 

I thought 3D scanning would be extremely painful to do without fancy equipment or complicated setups. However, Autodesk has a free (at least free for students) service called 123D Catch, which lets you upload pictures and have them processed into a 3D model in the cloud. Simply amazing. I just learned of this software today and ran a low-quality test with my quadrotor on my desk:

Considering it was done with just 20 unfocused, low-resolution photos that didn’t cover the entire span of the model, this 3D scan is amazing. I wish I had access to the source code.

There are many applications for this. It’s a 3D printer’s dream to be able to record something in the field and have a replica of it printed in a few hours without ever having to touch or measure the reference object. That could be done by scouting with a quadrotor or drone, or just by hand or in a studio. The drone 3D-model capture idea sounds great, though. I’ll try to take some aerial photos with the big quad later and make a 3D model out of those. Another cool idea is taking underwater video and stitching that. More things to come.