JavaScript, CSS Galore, and a Little News

Who can ever have enough CSS and JavaScript? Today I’ve got some libraries that you might find interesting.

Bootstrap – Quicker, Cleaner UI:

Everyone knows how hard it is to get the initial CSS template set up for a site, and the difficulty is only compounded by trying to ensure that your design is clean, that you have a solid grid system, that the CSS is cross-browser compatible, and that it all looks appealing. One designer over at Twitter (@mdo) realized this and released Bootstrap, a great starting point for the design of any site. It comes with its own built-in grid system, it’s cross-browser compatible, and, probably most important, it looks stunning. If you’re in need of a good starting point for your next project, why not take a look at it?
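As a rough sketch of how the grid works (the markup below is my own example; the span class names follow the release current as of this writing and could change in future versions):

<div class="container">
  <div class="row">
    <!-- span widths are illustrative; the Bootstrap docs list the available sizes -->
    <div class="span4">Sidebar</div>
    <div class="span8">Main content</div>
  </div>
</div>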

Turn.js – Page Turning in JavaScript:

A coder in Cincinnati named Emmanuel Garcia has made a pretty nice API for replicating the page-turning effects from iBooks in HTML5 and JavaScript. It’s cross-browser compliant and works on most mobile devices; the only exception I’ve found so far is the Nook’s browser. Here’s a link to his project, Turn.js; it’s also available on GitHub.
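As a quick sketch of how it’s used (the markup and dimensions here are my own example, and I’m assuming jQuery and turn.js are already loaded on the page):

<div id="flipbook">
  <div>Front cover</div>
  <div>Page 1</div>
  <div>Page 2</div>
  <div>Back cover</div>
</div>
<script>
  // Turn the stack of divs into a flipbook; width/height are arbitrary example values
  $("#flipbook").turn({ width: 800, height: 600 });
</script>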

Impress.js – Simple Presentations with JavaScript:

While it may not be a PowerPoint killer just yet, you can’t help but feel you’re looking at the future when you see a presentation done with Impress.js. Everything from the transitions to the 3D effects is very well done. If you can get your browser into full screen, it may be a viable alternative to PowerPoint; it will even let you embed other sites directly in the slides via an iframe for a real-time demo of a feature or flaw. It may be limited in its uses right now, but in the future I can see it being big.
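To give you a feel for how simple it is, a presentation is just positioned divs plus one init call (the coordinates below are arbitrary examples of mine; impress.js reads the data-* attributes to place each step on an infinite canvas):

<div id="impress">
  <div class="step" data-x="0" data-y="0">First slide</div>
  <!-- the second step is shifted, rotated, and scaled; the browser animates the transition -->
  <div class="step" data-x="1000" data-y="500" data-rotate="90" data-scale="2">Second slide</div>
</div>
<script src="impress.js"></script>
<script>impress().init();</script>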

CoNNECT in the news:

Now for a bit of news. As many of you may or may not know, I’m currently working at Oak Ridge National Laboratory (ORNL) on a project called CoNNECT (Citizen Engagement for Energy Efficient Communities). It’s an application that uses JavaScript to help people visualize their power usage and compare it with their peers’, with the hope that showing power company customers how efficient other people are will show them how efficient they could be. This week the project was featured on the Knoxville news station WATE. Here’s the link to their report: WATE Report. Featured in the video are Dr. Budhendra, the GIS division head, and Dr. Omitaomu, the researcher I currently work under.

Top 5 links of the past week:

Every so often I run out of original content to write or show; this is one of those times, so I’d like to direct you to some projects and people who I think are doing really amazing work.

For the techie:

DuoLingo:

First I’d like to show off DuoLingo, the brainchild of Luis von Ahn from Carnegie Mellon University. DuoLingo is a free service that seeks to help you learn a new language from scratch instead of buying software such as Rosetta Stone. The way it works is that you’re given sentences to translate to and from the language you’re learning. The really amazing part is that you’re doing actual work: the material you translate while learning is real content from somewhere on the web that gets translated by you and several other users. Check out his presentation at TEDx CMU.

You can sign up for the beta at DuoLingo

Rooting of the Playbook:

Looks like BlackBerry has finally met its match: hackers xpvqus, neuralgic, and cmwdotme have gained root access to RIM’s PlayBook. This is particularly interesting to me; RIM’s hardware is pretty nice, but their operating system seems clunky and unwieldy. Maybe, just maybe, someone will take it upon themselves to port a newer Android OS to this device.

TheVerge article on the rooting of RIM’s Playbook

Siri and X10:

I’m not sure why I’m recommending this link. On one hand, it’s really interesting to see what people are doing with the Siri proxy; on the other, it’s just terribly entertaining. This gentleman appears to have linked Siri to his X10 home automation system, which lets him give Siri voice commands to control whatever he has hooked up to the X10; in this case, his fireplace and his lights.

For the artist:

The 45 most powerful photos of 2011:

Enough said. The two that are most moving to me are #25 and #30; they will always be stuck in my mind.

Most Powerful Images of 2011

Society6:

I recently discovered this website. It’s basically an online storefront for graphic designers, but there’s some really inspired work on there; it’s definitely worth a look, and maybe even a purchase.

Society 6

Why you need to keep your variables the same type:

When I first started programming, I remember seeing a table of how much memory each variable type takes up: a byte takes one byte, a short takes two bytes, and an int takes four. Any enthusiastic programmer (I did this, so it may just be me) will immediately think, “I’m going to make my program take up as few bytes of memory as possible,” which is a noble idea. They’ll go through their program, determine the range of each variable, and figure out the fewest bytes each one needs. However, this approach can be flawed for the following reason: once the compiler converts your high-level language into machine code, mixed sizes can cost you extra processor cycles. This comes down to how numbers are handled inside the processor. The processor cannot add two operands that differ in size, so one must first be widened to match the other: it gets loaded into the accumulator of the appropriate size (AL, AX, or EAX for one, two, and four bytes respectively) and sign-extended, taking two instructions where one would do.

Take the following example for instance: [Written in Java]
byte x = 6;
short y = 32;
int z = 128;

int v = x+y+z;

The same process in x86 assembly (MASM-style syntax) would look roughly like this:
.data
x byte 6
y word 32
z dword 128
v dword ?
.code
mov al,x    ; load the 1-byte x into AL
cbw         ; sign-extend AL into AX
add ax,y    ; 16-bit add with the 2-byte y
cwde        ; sign-extend AX into EAX
add eax,z   ; 32-bit add with the 4-byte z
mov v,eax   ; store the full 32-bit result

Now let’s take a look at the processor-optimized solution: [Java]
int x = 6;
int y = 32;
int z = 128;

int v = x+y+z;

And in x86 assembly:
.data
x dword 6
y dword 32
z dword 128
v dword ?
.code
mov eax,x   ; all operands are already 32 bits, so no widening is needed
add eax,y
add eax,z
mov v,eax   ; store the 32-bit result

You see, the processor-optimized solution takes only four instructions, while the memory-optimized solution takes six. That might not seem like much, but imagine having to do this operation ten thousand times: the processor-optimized version executes only about two-thirds as many instructions, and it costs just five more bytes of memory (sixteen bytes of dwords versus eleven bytes of mixed sizes). That’s a real time savings no matter how you look at it.

Now, I’m not saying the processor-optimized solution is always the best one. What I am saying is that on a system where you don’t have to worry much about memory, a processor-optimized approach may make your program run faster. On a system with limited memory, though, it’s probably fine to optimize for memory anyway, since the widening instructions are still pretty fast. It’s worth noting that I’m just a student, so if I’ve misunderstood some concept, please correct me; I’m only trying to learn all I can.