Saturday, December 14, 2013

Scientific Computing: Computational Physics

After years of gaming, someone showed me a spreadsheet and my life was changed. I discovered for the first time that computers can be used for much more than playing video games. One of the first major uses of computing was calculating real-world variables, such as the trajectory and drop location of a bomb. Scientists found that they could take complex physics formulas and evaluate them almost instantly. This opened up computing to many new and exciting fields, not the least of which is computational physics.


The first electronic digital computer was built in the basement of the physics department at Iowa State University.  It was invented by Professor John Atanasoff, together with his graduate student Clifford Berry.  In fact, many of the major jumps in computing were made by physicists.  It just so happened that these physicists were experimenting with ideas and, in the process, created a computer.

Professor Charles Bennett, an IBM Fellow, makes a great argument: for a long time physics separated from computer science, and we stopped making great computational advances.  We thought that we had finally found the answer and stopped looking.  But recently physicists have reentered the computer science community to work on quantum computers.

In a normal computer every bit has two modes, on and off.  This is a simple way of describing binary.  In a quantum computer, each quantum bit (qubit) can hold a blend of both modes at once, not just one or the other.  A company known as D-Wave has built machines based on this idea.  Because a single qubit can be in a superposition of 0 and 1 simultaneously, a pair of qubits can represent all four combinations, (0,0), (0,1), (1,0) and (1,1), at the same time.  Every qubit you add doubles the amount of information the machine can work with at once.
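To make that doubling concrete, here is a minimal sketch (in Python with NumPy, my own choice for illustration) of the usual way qubit states are written down: a vector of amplitudes, one per possible outcome. Combining qubits multiplies the vectors together, so n qubits need 2^n amplitudes:

```python
import numpy as np

# A single qubit is a length-2 vector of amplitudes over the outcomes 0 and 1.
# This one is an equal superposition: measuring gives 0 or 1 with 50% odds.
qubit = np.array([1, 1]) / np.sqrt(2)

# Combining qubits takes the tensor (Kronecker) product of their vectors.
two_qubits = np.kron(qubit, qubit)        # 4 amplitudes: (0,0), (0,1), (1,0), (1,1)
three_qubits = np.kron(two_qubits, qubit) # 8 amplitudes

print(len(two_qubits), len(three_qubits))  # 4 8 -> 2**n amplitudes for n qubits
```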


Physics and computers have a long relationship, and recently that relationship has begun to grow back to what it once was.  Maybe in the near future we will see more great and wondrous advances in computing that will provide answers to bigger questions.  Or perhaps, take over the world.

Computer Graphics

Computer graphics is a term that describes the drawing of objects on a display.  The original task was simply to switch tiny little squares from on to off and back.  The display would be just black and white (no grayscale), and these tiny boxes, known as pixels, would turn on for white and off for black.  This allowed for text, Pong, and the beginning of the personal computer revolution.  Soon after, inventors discovered they could shade the pixels brighter or darker, ultimately creating grayscale.
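As a rough illustration (in Python, with made-up dimensions), here is that on/off pixel model as a tiny framebuffer; swapping the 0/1 values for numbers between 0 and 255 is all it takes to get grayscale:

```python
WIDTH, HEIGHT = 8, 4

# The whole screen is just a grid of pixels, all "off" (black) to start.
pixels = [[0] * WIDTH for _ in range(HEIGHT)]

# Turning pixels "on" (white) draws shapes -- here, a diagonal line.
for i in range(HEIGHT):
    pixels[i][i] = 1

# Show the framebuffer: '#' for on, '.' for off.
for row in pixels:
    print("".join("#" if p else "." for p in row))
```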


Today, computer graphics has advanced by leaps and bounds in order to build great games and movies.  Games have driven many advances in light shading and real-world physics.  Calculations are made to determine which pixels get changed to which colors.  In a video game, the user can make decisions that affect the angle or lighting of a particular object, so the computer has to make split-second decisions about what colors the pixels need to be.  These pixels are put together in an attempt to make things look like the real world.
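One of the most basic of those lighting calculations is diffuse (Lambertian) shading: a pixel gets brighter the more directly the light hits the surface behind it. Here is a minimal sketch in Python; the vectors are invented for illustration and assumed to already be normalized:

```python
import math

def lambert_shade(normal, light_dir):
    """Brightness of a surface point, from 0.0 (dark) to 1.0 (full).
    The more directly the light hits the surface, the brighter the pixel."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)  # surfaces facing away from the light stay dark

# A surface facing straight up, lit from 45 degrees overhead.
normal = (0.0, 1.0, 0.0)
light = (0.0, math.sqrt(0.5), math.sqrt(0.5))
print(lambert_shade(normal, light))  # ~0.707 -> a mid-bright pixel
```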



Computer graphics also has many real-world applications.  The medical field has made great advances as a result of computer animation.  Objects can be rendered as 3D representations on a computer so that doctors and scientists can get a closer and better look.


Computer graphics will continue to make great advances toward making visual representations more and more realistic.  We have come a long way from black-and-white boxes on a screen to today's high-quality, fast visual representations.  It will be exciting to see what comes next.

Tuesday, December 10, 2013

Communications and Security



In World War II it became apparent that the Japanese were able to crack the communication codes used by the military.  So the US Marines came up with a great idea: they could keep using communication codes, but have Navajo Indians relay the messages in their native language.  Even if the Japanese cracked the code, they would not be able to understand Navajo.  These troops came to be known as "Code Talkers."

Today, we still face the problem of bad people trying to collect our information.  Imagine if someone could see everything you were doing on your bank's website.  They would be able to see your password, account numbers, or billing information.  Imagine the damage that could be done.  So computer scientists teamed up with cryptographers to create communication security that would protect users from malicious attacks.


The most common type of encryption is symmetric-key encryption.  This uses a shared key, a secret of a specific size that both the encoder and the decoder need in order to encrypt and decrypt the message.  Anyone who obtains the information without the key will just see gibberish.  These keys have ranged from just a few bits to keys as long as the message itself (the one-time pad).  The larger the key, the harder the code is to break, but also the longer the message can take to encode and decode.
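To show the shared-key idea, here is a toy XOR sketch in Python. This illustrates the principle only; it is nowhere near a real cipher, and actual symmetric schemes like AES are far stronger. The point is that the same key both scrambles and unscrambles the message:

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the (repeating) key.
    Applying it twice with the same key restores the original.
    Toy demonstration only -- never use XOR as real encryption."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

key = b"shared-secret"                  # both sides must already have this
message = b"transfer $100 to account 42"

scrambled = xor_cipher(message, key)    # gibberish without the key
restored = xor_cipher(scrambled, key)   # same key, same function, message back

print(scrambled)
print(restored == message)  # True
```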



Some companies, and even militaries around the world, have taken this encryption to the next level by using physical objects.  Things like fingerprints or even physical keys can be used.  A version used by the US military combines a physical key with a clock element.  The computers on both ends must have their clocks synced.  Then both users must insert and turn their keys within seconds of each other in order for the computers to produce the same temporary encryption key.  The message is sent and received with no issue.  This makes the key different every time, while also reducing the risks that come with lost or stolen physical keys.
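The clock-synced part can be sketched in a few lines of Python. This is my own illustration of the general idea, not the military's actual scheme: both machines mix a long-term shared secret with the current time window, so they derive the same temporary key only while their clocks agree.

```python
import hashlib
import hmac
import time

def temporary_key(shared_secret: bytes, window_seconds: int = 30) -> bytes:
    """Derive a short-lived key from a long-term secret and the clock.
    Two machines with synced clocks compute the same key; a few
    minutes later, the derived key is different."""
    window = int(time.time() // window_seconds)  # same value on both ends
    return hmac.new(shared_secret, str(window).encode(), hashlib.sha256).digest()

secret = b"loaded-from-the-physical-key"  # hypothetical long-term secret
print(temporary_key(secret).hex())
```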

Your information online is very important, and it is our responsibility to make sure it is handled in a safe and secure fashion.  Please check that everything you do is secure before transferring important information.


Artificial Intelligence




Whenever the topic of artificial intelligence comes up, the first thing that comes to mind is "Skynet."  In the movie "Terminator," artificial intelligence became so advanced that it decided the world would be better off without people.  In a similar fashion, the short stories that later became the movie "I, Robot" display how an artificial intelligence eventually came to conclusions that were dangerous for mankind.

Webster's dictionary defines artificial intelligence as "the power of a machine to copy intelligent human behavior."  I believe it is slightly more than that.  I would like to define it as "the power of a machine to make decisions that are not pre-defined."

Under Webster's definition, we could say that a program that merely asks questions and provides a limited number of responses is artificial intelligence.  For example, the site http://en.akinator.com is a 20-questions game where the makers have built an elaborate database that can narrow almost anything down in 20 questions.  But this does not display the computer's ability to think.  All of the decisions are pre-defined.
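That kind of pre-defined decision-making is easy to sketch. Here is a toy 20-questions-style tree in Python (the questions and animals are made up); every path through it was written by a human in advance, which is exactly the point:

```python
# Each node is either a question with yes/no branches, or a final guess.
tree = {
    "q": "Does it live in water?",
    "yes": {"guess": "a fish"},
    "no": {
        "q": "Does it fly?",
        "yes": {"guess": "a bird"},
        "no": {"guess": "a dog"},
    },
}

def play(node):
    # Walk down the tree until we hit a leaf holding a guess.
    while "guess" not in node:
        answer = input(node["q"] + " (yes/no) ").strip().lower()
        node = node["yes"] if answer == "yes" else node["no"]
    print("I guess:", node["guess"])

play(tree)  # the program never invents a question or a guess on its own
```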



Siri is a much better example of artificial intelligence.  Siri can't take over the world or exterminate humans in any way, but the more people use Siri, the better Siri can provide answers and understand people.  The developers at Apple designed her to increase her intelligence and become more useful on her own.


I don't believe we will end up with some kind of Skynet; artificial intelligence can be very useful and helpful.  It provides us with many tools and ways to increase our knowledge.  Maybe someday we will see a significant improvement in the medical industry as a result of artificial intelligence.