The need for physical computing

Coding is one of the big buzzwords in the edusphere nowadays. Many campuses are involved in the Hour of Code, and many more have moved beyond the Hour of Code and established coding clubs. Some are using curriculum from code.org, others are going through Google's CS First curriculum, while others may use Tynker, Code Combat, or even go rogue and use their own homegrown curriculum. Personally, I like the organic idea of building your own curriculum, though the resources from all the sites listed above are superb, and building from a foundation, especially a solid one, is really a good idea.

So when people are doing all of this coding, what is it they are learning? At the foundation, they are learning logic, learning to create algorithms that solve problems, and working through the problem-solving process to find a solution to the task at hand. Many of the tools, such as Tynker, Scratch, Snap, and Hopscotch, have users manipulate sprites as part of the program to give the programmer a more direct connection to what they are programming. I would argue that it is incredibly useful to build with blocks and then see how the sprite is affected. However, programming at this level is only the beginning. The next step is physical computing.

So, physical computing. What is it, and how do I get started with it? Physical computing is basically merging hardware and software in order to perform physical functions that interact with the non-digital (aka analog) world. At the heart of physical computing are tools such as Spheros, Ozobots, Edison bots, the Lego WeDo, the EV3, Arduino, Raspberry Pi, and a new kid on the block, the Microbit. These are tools that are programmed through an interface, and then sensors, motors, servos, and other electronics are added to help perform a variety of functions. What are some of the possibilities with physical computing?

Programming a robot to move around independently
A water temperature sensor that triggers an alarm at specific temperatures (a short sketch of this one follows the list)
Weather balloons
Christmas light shows
A catapult (why not, it's just cool)
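To give a taste of what that looks like in practice, here is a minimal sketch of the temperature-alarm idea in MakeCode's JavaScript (TypeScript) for the Microbit. It uses the board's onboard temperature sensor as a stand-in for a dedicated waterproof probe, and the 30°C threshold is just an example value, not anything from a real build:

```typescript
// Minimal MakeCode sketch: flash a warning and sound a tone when it gets too warm.
// The onboard temperature sensor stands in for a waterproof probe; 30 °C is an
// arbitrary example threshold.
basic.forever(function () {
    if (input.temperature() > 30) {
        basic.showIcon(IconNames.No)
        // Plays through an attached buzzer/speaker (or the built-in speaker on a V2 board)
        music.playTone(Note.A, music.beat(BeatFraction.Whole))
    } else {
        basic.showIcon(IconNames.Yes)
    }
    basic.pause(1000)
})
```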

Merging the digital with the physical environment is a natural step in the programming process. When learning to code, we deconstruct problems using the problem-solving process of computer programming.

We construct the idea in pseudocode, basically a stripped-down form of the code that helps us understand the logic and flow of what is going on without using the full syntax. After pseudocoding, we begin writing the actual code, and then the problem solving begins in earnest. We will naturally have problems in the code, and then it is up to us to debug and 'make it work.'
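For example, pseudocode for the obstacle-avoiding smart car described below might be nothing more than:

repeat forever:
    read the distance from the ultrasonic sensor
    if an obstacle is closer than some threshold:
        turn away from it
    otherwise:
        keep driving forward

None of that is real code, but it captures the logic that we then have to translate, test, and debug.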

The idea is the same when working in the physical environment. Let's take a robot smart car:

[Image: robot smart car]

This has motors, a microcontroller, and a sensor. The goal is to make an independent smart car that moves along the ground and, when it confronts an obstacle, turns around. The microcontroller is programmed (using MakeCode for the Microbit, or iForge for Arduino) to power up the motors. An ultrasonic sensor is connected to the microcontroller, and the pins on the microcontroller are set so the ultrasonic sensor is on and can 'detect' obstacles.
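To make that concrete, here is a minimal sketch of the idea in MakeCode's JavaScript (TypeScript), not the exact program from this build. It assumes a simple motor driver with left and right wheel speed on pins P0 and P1, and an HC-SR04-style ultrasonic sensor with its trigger on P8 and echo on P12; those pin choices, the 15 cm threshold, and the 600 ms turn time are all placeholder values that would need tuning on a real car:

```typescript
// Read the ultrasonic sensor: pulse the trigger, time the echo, convert to centimeters.
// Pin assignments (P8 trigger, P12 echo, P0/P1 motors) are assumptions, not from the post.
function distanceCm(): number {
    pins.digitalWritePin(DigitalPin.P8, 0)
    control.waitMicros(2)
    pins.digitalWritePin(DigitalPin.P8, 1)
    control.waitMicros(10)
    pins.digitalWritePin(DigitalPin.P8, 0)
    const echoTime = pins.pulseIn(DigitalPin.P12, PulseValue.High, 25000)
    return Math.idiv(echoTime, 58)
}

// Drive both wheels forward at a moderate speed (0-1023 analog range).
function driveForward() {
    pins.analogWritePin(AnalogPin.P0, 700)
    pins.analogWritePin(AnalogPin.P1, 700)
}

// Pivot away from an obstacle by running only one wheel for a moment.
function turnAway() {
    pins.analogWritePin(AnalogPin.P0, 700)
    pins.analogWritePin(AnalogPin.P1, 0)
    basic.pause(600)
}

basic.forever(function () {
    const d = distanceCm()
    if (d > 0 && d < 15) {
        turnAway()
    } else {
        driveForward()
    }
    basic.pause(50)
})
```

Most of the work ends up being in those numbers, which is exactly where the trial and error described below comes in.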

Naturally, a lot of 'bugs' happened in this process, and I had to work out a lot of errors. I needed to get the motors to run independently, to turn correctly, and to go the right speed. When I added the sensor, I had to dial in the detection distance and then have the motors turn for the proper amount of time so that the car could move out of the way of the obstacle. This was definitely a process and took quite a bit of trial and error.

In a future blog post I hope to break down the Microbit, or I may just have a student do a guest post and share their experience with the Microbit and what it can do for learning.

Till next time. Keep the thinking organic!