Rapid Android Development
Daniel Sauter
Preface
Processing is a favorite among artists and designers and widely popular among developers who look for a productivity edge.1 The programming language and environment has developed from a sketching tool into a production environment for a range of operating systems and platforms. The Android mode, introduced to Processing with the release of version 2.0, now makes it as easy to develop Processing apps for Android as for the desktop.

Initiators Ben Fry and Casey Reas have promoted software literacy since 2001 using Processing, a free open source tool that can be used by individuals at any level of programming experience. The Processing project thrives on the support of its generous online community, whose members encourage collaboration and code sharing and are responsible for one of Processing's most important features: its libraries. Libraries have made Processing the versatile and capable coding environment that it is today. Members have contributed more than 130 libraries to Processing over the last decade.

I have extensively used Processing in the classroom during the past eight years and have realized various projects with it, sometimes in conjunction with Processing's sister project, Arduino.2 In 2010, I started the Ketai (the Japanese term for cell phone culture) library with Jesus Duran,3 which brings Android hardware features to Processing and makes it possible to work with sensors and hardware devices using simple and easy-to-read code. In comparison, developing Android apps in Java using the standard Eclipse IDE entails a much steeper learning curve,4 one that requires a programmer to master both the syntax of a modern object-oriented language and the features of a complex development environment.
1. http://processing.org/
2. http://arduino.cc/
3. http://code.google.com/p/ketai/
4. http://developer.android.com/sdk/eclipse-adt.html
In Processing, we can see results immediately because we are working with a straightforward syntax and a wide range of libraries and tools designed specifically to support visually lavish and highly interactive applications. Android users expect a rich, interactive mobile user experience from their phones and tablets, one that takes full advantage of their touch screens, networking hardware, sensors for geolocation and device orientation, built-in cameras, and more. In this book, we'll learn how to create apps for Android devices that take full advantage of their many built-in hardware affordances.
Let's take a look at some of the main advantages of developing Android apps in Processing:

- If you are new to programming, Processing for Android is much easier to learn than Java. If you are an experienced Java programmer already, Processing is a great programming environment for rapid prototyping of graphics- and sensor-heavy apps.
- Processing uses a straightforward syntax. In comparison to Java, it is more concise.6 Processing doesn't require you to understand advanced concepts such as classes or screen buffering to get started, yet it makes them accessible to any advanced users who want to use them. This makes Processing programs shorter and easier to read (a minimal example sketch appears below).
- The lightweight programming environment installs quickly and is easy to use. Processing is available for GNU/Linux, Mac OS X, and Windows. If you work with multiple computers or want to help someone else get started quickly, being up and running in a few minutes can make all the difference.
- Processing for Android supports OpenGL. When working with GPU-accelerated 2D and 3D graphics and geometry, lights, or textures, comprehensive OpenGL support is essential to ensure reasonably high frame rates and a fluid user experience.

The latest version of Processing supports three application environments, or modes. Applications written in Java mode will run on Linux, Mac, or Windows systems. Programs written in Android mode will run on Android devices, and those written in JavaScript mode will run in any HTML5 browser. The Android mode is designed for creating native Android apps.

Once your sketch prototyping is done, you can easily move your work to Eclipse for debugging and deployment. Processing lets you export your sketches as Android projects via the File > Export Android Project menu, creating an android directory with all the necessary files in it. Though currently deactivated and still under development, Processing will also facilitate the process of publishing to Google Play using a built-in dialog that guides you through the signing and releasing process (File > Export Signed Package).7
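To give an idea of that conciseness, here is a complete, minimal sketch; it is an illustrative example and not one of the book's projects. It opens a window and paints translucent circles wherever the mouse (or finger) moves, with no class definitions or boilerplate required:

// Illustrative example only (not from the book's projects): a complete sketch
// that draws translucent circles at the pointer position every frame.
void setup()
{
  size(480, 320);
  noStroke();
  fill(0, 102, 153, 60);   // translucent blue
}

void draw()
{
  ellipse(mouseX, mouseY, 40, 40);
}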
6. http://wiki.processing.org/w/Java_Comparison
7. https://play.google.com/store
This list of advantages should provide you with all the evidence you need to conclude that Processing is a great environment for developing Android apps. Your projects can scale in scope and context: from sketch to prototype and from prototype to market-ready application, from CPU-focused graphics rendering to hardware-accelerated GPU-focused rendering, from Processing statements and libraries to Android and Java statements and packages, and from a particular operating system and device to other operating systems and devices. You won't need to worry about finding a different last-minute route or an alternative solution for your software projects. Projects can grow with you and will let you enjoy the iterative process of design and development.
Prerequisites
If you have never programmed in Processing or any other language before, you can turn to two excellent sources to get up to speed; I've listed them below. You need to have an idea of the basic principles of programming, such as the use of variables, conditionals, and loops, to fully enjoy the book. If you feel a little shaky with any of those concepts, I recommend you get one of the two books and keep it close by for frequent consultation. If you have scripted or programmed before, even if only at a basic level, you should be able to follow the examples in this book with a close read.

Getting Started with Processing [RF10]: This casual, inexpensive book is a concise introduction to Processing and interactive computer graphics.8 Written by Processing's initiators, it takes you through the learning process one step at a time to help you grasp core programming concepts.

Processing: A Programming Handbook for Visual Designers and Artists, Second Edition [RF11]: This book is an introduction to the ideas of computer programming within the context of the visual arts.9 It targets an audience of computer-savvy individuals who are interested in creating interactive and visual work through writing software but have little or no prior experience.
Chapter 3, Using Motion and Position Sensors, on page ?, introduces us to all the device sensors built into an Android. We'll display accelerometer values on the Android screen, build a motion-based color mixer, and detect a device shake.

In Part II, we'll be working with the camera and location devices found on most Androids. Chapter 4, Using Geolocation and Compass, on page ?, shows us how to write location-based apps. We'll determine our location, the distance to a destination and to another mobile device on the move, and calculate the speed and bearing of a device. Chapter 5, Using Android Cameras, on page ?, lets us access the Android cameras through Processing. We'll display a camera preview of the front- and back-facing cameras, snap and save pictures to the camera's SD card, and superimpose images.

In Part III, we'll learn about peer-to-peer networking. Chapter 6, Networking Devices with Wi-Fi, on page ?, teaches us how to connect the Android with our desktop via Wi-Fi using the Open Sound Control protocol. We'll create a virtual whiteboard app, where you and your friends can doodle collaboratively, and we'll build a marble-balancing game, where two players compete on a shared virtual board. Chapter 7, Peer-to-Peer Networking Using Bluetooth and Wi-Fi Direct, on page ?, shows us how to use Android Bluetooth technology to discover, pair, and connect Android devices. We'll create a remote cursor sketch and build a survey app to share questions and answers between devices. Chapter 8, Using Near Field Communication (NFC), on page ?, introduces us to the emerging short-range radio standard designed for zero-click interaction at close proximity, which is expected to revolutionize the point-of-sale industry. We'll read and write NFC tags and exchange data between Android devices via NFC and Bluetooth.

Part IV deals with data and storage, as all advanced apps require some sort of data storage and retrieval to keep user data up-to-date. In Chapter 9, Working with Data, on page ?, we'll load, parse, and display data from text files and write data to a text file in the Android storage. We'll also connect to a data source hosted online to create an earthquake app that visualizes currently reported earthquakes worldwide. Chapter 10, Using SQLite Databases, on page ?, introduces us to the popular SQLite database management system and Structured Query Language. We'll record sensor data into a SQLite database and query it for particular data attributes.
Part V gets us going with 3D graphics and cross-platform apps. Chapter 11, Introducing 3D Graphics with OpenGL, on page ?, will show us how to work with 3D primitives, how virtual light sources are used, and how cameras are animated. Chapter 12, Working with Shapes and 3D Objects, on page ?, deals with 2D
vector shapes and how to load and create 3D objects. Chapter 13, Sharing and Publishing Applications, on page ?, opens up our mobile app development to a wide range of devices and platforms using the JavaScript mode in Processing. We'll discuss some of the benefits of web apps, which can run in any modern browser, as well as the limitations they face when using built-in device hardware.
The projects in this book require at least one Android device. To complete Part III, you need two Android devices. This allows us to run and test the sketches on the actual hardware, use the actual sensors, and get the actual mobile user experience that is the focus of this book.
Figure 1: Tested Android phones and tablets. Clockwise from top left: ASUS Transformer Prime, Samsung Galaxy SIII, Samsung Nexus S, and Google Nexus 7

The projects were tested on the following devices:

- Asus Transformer Prime tablet with 32 GB memory (Ice Cream Sandwich, Jelly Bean)
- Samsung Galaxy SIII (Ice Cream Sandwich, Jelly Bean)
- Samsung Nexus S (Ice Cream Sandwich, Jelly Bean)
- Google Nexus 7 with 8 GB memory (Jelly Bean)

All the code is available online. Feel free to comment and drop some feedback!
Online Resources
You can download the complete set of source files from the book's web page at http://pragprog.com/titles/dsproc/source_code. The compressed file available online contains all the media assets you need, organized by chapter directories and
individual projects. If you're reading the ebook, you can also open the discussed source code just by clicking the file path before the code listings.

The online forum for the book, located at http://forums.pragprog.com/forums/209, provides a place for feedback, discussion, questions, and, I hope, answers as well. In the ebook, you'll find a link to the forum on every page next to a "report erratum" link that points to http://pragprog.com/titles/dsproc/errata, where you can report errors such as typos and technical mistakes and make suggestions. Your feedback and suggestions are very much appreciated.

Let's get started! Once we're done installing our software tools in Chapter 1, Getting Started, on page ?, we are only minutes away from completing our first Android app.

Daniel Sauter
Associate Professor of New Media Art, University of Illinois at Chicago, School of Art and Design
[email protected]
Chicago, 2013-03-4
2.7 Introducing 2D Transformations
For this project, we'll lock our app into LANDSCAPE mode using orientation() so we can maintain a clear reference point as we discuss 2D transformations in reference to the coordinate system. To center our rectangle on the screen when we start up, to scale it from its center point using the pinch gesture, and to rotate it around its center point using the rotate gesture, we need to work with two-dimensional (2D) transformations.28 We'll use Processing's rectMode(CENTER) method to overwrite the default way a rectangle is drawn in Processing,29 which is from the upper left corner of the rectangle located at position [x, y] with a specified width and height. Instead we draw it from its center point using rectMode(CENTER), which allows us to rotate and scale it around its center point.
26. http://processing.org/reference/rect_.html
27. http://processing.org/reference/random_.html
28. http://processing.org/learning/transform2d/
29. http://processing.org/reference/rectMode_.html
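To see the difference rectMode(CENTER) makes, here is a minimal sketch; it is an illustrative example rather than part of the chapter's project, and the sizes and colors are arbitrary. It draws the same rectangle twice, once anchored at its upper left corner (the default) and once anchored at its center:

// Illustrative example only: the same coordinates interpreted in the two rect modes.
void setup()
{
  size(400, 400);
  noFill();
}

void draw()
{
  background(255);
  rectMode(CORNER);    // default: [x, y] is the rectangle's upper left corner
  stroke(0);
  rect(width/2, height/2, 120, 80);
  rectMode(CENTER);    // now the same [x, y] describes the rectangle's center point
  stroke(255, 0, 0);
  rect(width/2, height/2, 120, 80);
}

Only the centered rectangle stays put when we later rotate and scale around the point we translate to, which is why the project switches to rectMode(CENTER).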
Figure 7: Using multitouch gestures. The illustration shows a rectangle scaled with a two-finger pinch gesture, turned by a two-finger rotation gesture, placed on a magenta background color triggered by a flick, as well as a gray fill color caused by a long press. The text DOUBLE appears due to a double-tap gesture at the position indicated by the hand silhouette.

A common metaphor to explain 2D transformations is a grid or graph paper. Using this analogy, each grid cell stands for one pixel of our app's display window. The default origin in Processing's coordinate system is always the upper left corner of the window. Each graphic element is drawn relative to this origin onto the screen. To move and rotate our rectangle, we'll use Processing's transformation methods: translate() and rotate().30 We also have a scale() method,31 which we won't use in this sketch. When we draw graphic objects in Processing on our grid paper, we are used to specifying the rectangle's horizontal and vertical coordinates using x and y values. We can use an alternative method, which is necessary here, where we move our grid (paper) to specified horizontal and vertical coordinates, rotate it, and then draw the rotated rectangle at position [0, 0]. This way the rectangle doesn't move to our intended position, but our grid paper (coordinate system) does. The advantage is that we can now rotate() our rect() right on the spot around its center point, something we can't do otherwise. What's more, we can introduce a whole stack of grid paper if we'd like to by using the Processing methods pushMatrix() and popMatrix(). When we move, rotate, and scale multiple elements and would like to transform them separately, we
need to draw them on separate pieces of grid paper. The pushMatrix() method saves the current position of our coordinate system, and popMatrix() restores the coordinate system to the way it was before pushing it. Like our first project in this chapter, in which we used Processing's mousePressed(), mouseReleased(), and mouseDragged() callback methods to identify touches to the screen, some of the multitouch gestures introduced here fulfill the same purpose. If we'd like to use Processing's mouse methods alongside the multitouch methods provided by KetaiGesture, we'll need to call the superclass method surfaceTouchEvent() to notify the Processing app that a surface touch event has occurred.32 Now let's take a look at our multitouch code.
32. http://processing.org/reference/super.html

import ketai.ui.*;
import android.view.MotionEvent;

KetaiGesture gesture;
float rectSize = 100;
float rectAngle = 0;
int x, y;
color c = color(255);          // fill color for the rectangle and the text
color bg = color(78, 93, 75);  // dark green background color

void setup()
{
  orientation(LANDSCAPE);
  gesture = new KetaiGesture(this);
  rectMode(CENTER);
  x = width/2;
  y = height/2;
}

void draw()
{
  background(bg);
  pushMatrix();
  translate(x, y);
  rotate(rectAngle);
  fill(c);
  rect(0, 0, rectSize, rectSize);
  popMatrix();
}

void onTap(float x, float y)
{
  text("SINGLE", x, y-10);
  println("SINGLE:" + x + "," + y);
}

void onDoubleTap(float x, float y)
{
  text("DOUBLE", x, y-10);
  println("DOUBLE:" + x + "," + y);
  if (rectSize > 100)
    rectSize = 100;
  else
    rectSize = height - 100;
}

void onLongPress(float x, float y)
{
  text("LONG", x, y-10);
  println("LONG:" + x + "," + y);
  c = color(random(255), random(255), random(255));
}

void onFlick(float x, float y, float px, float py, float v)
{
  text("FLICK", x, y-10);
  println("FLICK:" + x + "," + y + "," + v);
  bg = color(random(255), random(255), random(255));
}

void onPinch(float x, float y, float d)
{
  rectSize = constrain(rectSize+d, 10, 500);
  println("PINCH:" + x + "," + y + "," + d);
}

void onRotate(float x, float y, float angle)
{
  rectAngle += angle;
  println("ROTATE:" + angle);
}

void mouseDragged()
{
  // move the rectangle only while we are touching it (within half its size)
  if (abs(mouseX - x) < rectSize/2 && abs(mouseY - y) < rectSize/2)
  {
    x += mouseX - pmouseX;
    y += mouseY - pmouseY;
  }
}

public boolean surfaceTouchEvent(MotionEvent event)
{
  // call to keep the mouseX and mouseY constants updated
  super.surfaceTouchEvent(event);
  // forward events to the gesture object
  return gesture.surfaceTouchEvent(event);
}
Let's take a look at the steps we need to take to capture and use multitouch gestures on the Android touch screen.

- Import Ketai's ui package to give us access to the KetaiGesture class.
- Import Android's MotionEvent package.
- Define a variable called gesture of type KetaiGesture.
- Set a variable we call rectSize to 100 pixels to start off.
- Define the initial color c (white), which we'll use as a fill color for the rectangle and text.
- Define the initial color bg (dark green), which we'll use as a background color.
- Instantiate our KetaiGesture object gesture.
- Set the initial value for our variable x as the horizontal position of the rectangle, and for y as its vertical position.
- Push the current matrix on the matrix stack so that we can draw and rotate the rectangle independent of other UI elements, such as the text.
- Move to the position [x, y] using translate().
- Pop the current matrix to restore the previous matrix on the stack.
- Use the callback method onTap() to display the text string SINGLE at the location (x, y) returned by KetaiGesture.
- Use the callback method onDoubleTap() to display the text string DOUBLE at the location returned by KetaiGesture, indicating that the user triggered a double-tap event. Use this event to decrease the rectangle size to the original 100 pixels if it's currently enlarged, and increase the rectangle scale to the display height minus 100 pixels if it's currently at its original scale.
- Use the callback method onLongPress() to display the text string LONG at the location (x, y) returned by KetaiGesture. Use this event to randomly select a new color c using random(), which we'll use as a fill color for the rectangle.
- Use the callback method onFlick() to display the text string FLICK at the location x and y returned by KetaiGesture. Also receive the previous location where the flick was initiated as px and py, as well as the velocity v.
- Use the callback method onPinch() to calculate the scaled rectSize using the pinch distance d at the location x and y returned by KetaiGesture.
- Use the callback method onRotate() to update the rectangle's rotation angle rectAngle using the angle value returned by KetaiGesture.
- Use Processing's mouseDragged() callback to update the rectangle position (x and y) by the number of pixels moved. Determine this amount by subtracting the previous pmouseX from the current mouseX, and pmouseY from mouseY. Move the rectangle only if the absolute distance between the rectangle and the mouse position is less than half the rectangle's size, that is, only while we are touching the rectangle.
- Use the Processing method surfaceTouchEvent() to notify Processing about mouse/finger-related updates.

Let's test the app.
8.3
Figure 30: Broadcast pixels using NFC and Bluetooth. Touching NFC devices back-to-back initiates the Bluetooth connection, starting a two-directional pixel broadcast. The camera preview is then sent from one device to the other and displayed there. The top image shows the sampled camera image after two taps, the bottom image after four.

When we run the app on the networked Androids, we will get a sense of how much data we can send via Bluetooth and at what frame rate. We'll revisit concepts from previous chapters where we worked with a live camera preview, Chapter 5, Using Android Cameras, on page ?, and sent Bluetooth messages, Chapter 7, Peer-to-Peer Networking Using Bluetooth and Wi-Fi Direct, on page ?, now using NFC to initiate the network.
13. http://en.wikipedia.org/wiki/Recursion_%28computer_science%29
us to iterate through the image until we reach a number of divisions that we'll set beforehand with a variable we'll name divisions. Setting a limit is important, since the recursion would otherwise continue forever, eventually freezing the app. Let's name the recursive function interlace(). Each time it runs when we tap the screen, it will split each pixel in the current image into four new pixels. The interlace() method we'll create works with the divisions parameter to control how many recursions will be executed. We'll start with a divisions value of 1, for one division. Each time we tap the screen, divisions will increase to 2, 3, and so on, which will also increase the level parameter in our interlace() method. There we are using level to check that it has a value greater than 1 before recursively calling the interlace() method again to split each pixel into four.

In the main tab we also import the Ketai camera package, which is familiar to us from Chapter 5, Using Android Cameras, on page ?. We'll create a KetaiCamera object that we'll name cam. The cam object will read the image each time we receive a new frame from the camera.

For this sketch, we'll use the following tabs to organize our code:
NFCBTTransmit: Contains the main sketch, including our setup() and draw() methods, along with the interlace() method for recursively splitting the camera preview image. It also contains a mousePressed() method to increase the global variable divisions, used as a parameter for interlace(), and a keyPressed() method that allows us to toggle the local camera preview on and off.

ActivityLifecycle: Contains all the methods we need to start NFC and Bluetooth correctly within the activity life cycle. We require a call to onCreate() for launching Bluetooth, onNewIntent() to enable NFC, and onResume() to start both NFC and Bluetooth.

Bluetooth: A tab for the two Bluetooth methods we are working with, send() and onBluetoothDataEvent().

NFC: A tab for the onNFCEvent() method that launches the Bluetooth connection when we receive the other device's Bluetooth ID via NFC.

We'll create each of those tabs step by step and present the code for each component separately in the following sections. Let's first take a look at our main tab.
NFC/NFCBTTransmit/NFCBTTransmit.pde
import android.app.PendingIntent;
import android.content.Intent;
import android.os.Bundle;

import ketai.net.*;
import oscP5.*;
import netP5.*;
import ketai.camera.*;
import ketai.net.bluetooth.*;
import ketai.net.nfc.*;

KetaiCamera cam;
int divisions = 1;
String tag = "";

void setup()
{
  orientation(LANDSCAPE);
  noStroke();
  frameRate(10);
  background(0);
  rectMode(CENTER);
  bt.start();
  cam = new KetaiCamera(this, 640, 480, 15);
  ketaiNFC.beam("bt:" + bt.getAddress());
}

void draw()
{
  if (cam.isStarted())
    interlace(cam.width/2, cam.height/2, cam.width/2, cam.height/2, divisions);
  if ((frameCount % 30) == 0)
    ketaiNFC.beam("bt:" + bt.getAddress());
}
void interlace(int x, int y, int w, int h, int level)
{
  if (level == 1)
  {
    color pixel = cam.get(x, y);
    send((int)red(pixel), (int)green(pixel), (int)blue(pixel), x, y, w*2, h*2);
  }
  if (level > 1)
  {
    level--;
    // split the current area into four quadrants and recurse into each of them
    interlace(x - w/2, y - h/2, w/2, h/2, level);
    interlace(x - w/2, y + h/2, w/2, h/2, level);
    interlace(x + w/2, y - h/2, w/2, h/2, level);
    interlace(x + w/2, y + h/2, w/2, h/2, level);
  }
}

void mousePressed()
{
  divisions++;
}
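A quick calculation shows why limiting divisions matters. Each level splits every rectangle into four, so a given divisions setting produces 4^(divisions-1) sampled pixels per frame: 1, 4, 16, 64, 256, and so on. Because the sketch runs at roughly ten frames per second, by the fifth tap we are already sending on the order of 256 x 10 = 2,560 pixel messages per second over Bluetooth, which is why the app visibly slows down after only a few taps.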
Here are the steps we need to recursively process the live camera image.

- Set the initial number of divisions to 1, showing one fullscreen rectangle.
- Center each rectangle around the horizontal and vertical location where it is drawn, using rectMode(CENTER).
- Call the recursive function with starting values for each parameter, starting in the center of the camera preview. Use the following parameters for interlace(): horizontal position x, vertical position y, rectangle width w, rectangle height h, and the number of divisions.
- Get the pixel color at the defined x and y location in the camera preview image, the pixel located in the exact center of each rectangular area we use for the low-resolution preview.
- Send the pixel data using our user-defined function send().
- Decrease the level variable by 1 before calling the recursive function again, and make those calls only while level is greater than 1, so the recursion has a limit.
- Call interlace() recursively from within itself, using a new location and half the width and height of the previous call as parameters.
- Increment the number of divisions when tapping the screen.

Now that we are done with our coding for the camera and the recursive program to create a higher-and-higher resolution image preview, let's create the code we need to activate NFC and Bluetooth in the activity life cycle.
Enable NFC and Bluetooth in the Activity Life Cycle
To use NFC and Bluetooth, we need to take similar steps in the activity life cycle as we've done for our Bluetooth peer-to-peer app. In Section 7.4, Working with the Android Activity Life Cycle, on page ?, we looked at the callback methods called during an activity life cycle. For this project, we need to tell Android that we'd like to activate both NFC and Bluetooth. Let's put the lifecycle code for the activity into an ActivityLifecycle tab.

At the very beginning of the life cycle, in onCreate(), we'll launch KetaiBluetooth by initiating our KetaiBluetooth object, and we'll tell Android that we intend to use NFC.14 We do so using an intent, which is a data structure that tells Android that an operation needs to be performed. For example, an intent can launch another activity or send a result to a component that declared interest in it. Functioning like a kind of glue between activities, an intent binds events between the code in different applications. We need an Intent to launch NFC.

When NFC becomes available because our activity is running in the foreground on top of the activity stack, we get notified via onNewIntent(), because we asked for such notification with our intent in onCreate(). This is where we tell Android that we use the result of the returned intent with our ketaiNFC object, launching NFC in our sketch. An activity is always paused before receiving a new intent, and onResume() is always called right after this method.

When Bluetooth is available as the result of the Bluetooth activity we launched in onCreate() while instantiating KetaiBluetooth, the connection is handed to us via onActivityResult(), which we then assign to our bt object. Finally, in onResume(), we start our Bluetooth object bt and instantiate our NFC object ketaiNFC. Let's take a look at the actual code for ActivityLifecycle.
14. http://developer.android.com/reference/android/content/Intent.html

NFC/NFCBTTransmit/ActivityLifecycle.pde
void onCreate(Bundle savedInstanceState)
{
  super.onCreate(savedInstanceState);
  bt = new KetaiBluetooth(this);
  ketaiNFC = new KetaiNFC(this);
  ketaiNFC.beam("bt:" + bt.getAddress());
}

void onNewIntent(Intent intent)
{
  // hand the NFC intent over to the ketaiNFC object
  ketaiNFC.handleIntent(intent);
}

void onActivityResult(int requestCode, int resultCode, Intent data)
{
  // receive the Bluetooth connection we initiated in onCreate()
  bt.onActivityResult(requestCode, resultCode, data);
}

void onPause()
{
  super.onPause();
  // release the camera when another activity starts so it can use it
  if (cam != null && cam.isStarted())
    cam.stop();
}

void exit()
{
  cam.stop();
}

void onDestroy()
{
  super.onDestroy();
  // stop Bluetooth and the camera when the activity stops
  bt.stop();
  cam.stop();
}
We need these steps to initiate NFC and Bluetooth correctly within the activity life cycle.

- Instantiate the Bluetooth object bt to start a Bluetooth activity.
- Register the NFC intent while our activity is running by itself in the foreground using FLAG_ACTIVITY_SINGLE_TOP.
- Receive the NFC intent that we declared in onCreate(), and tell Android that ketaiNFC handles it.
- Receive the Bluetooth connection if it started properly when we initiated it in onCreate().
- Release the camera when another activity starts so that it can use it.
- Stop Bluetooth and the camera when the activity stops.

All of this happens right at the beginning when our sketch starts up. The callback methods we are using require some getting used to. Because NFC and Bluetooth launch in separate threads or activities from our sketch, and not sequentially within our sketch, we need the callback methods to notify us when the Bluetooth activity and the NFC intent have finished their individual tasks. And because we depend on the successful delivery of the NFC payload for our Bluetooth connection, we need to use those callback methods and integrate them into the activity life cycle of our sketch. Processing and Ketai streamline many aspects of this programming process, but when it comes to peer-to-peer networking between Android devices, we still need to deal with those essentials individually.
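To make the FLAG_ACTIVITY_SINGLE_TOP step more concrete, here is a rough sketch of what registering for NFC intents in the foreground looks like in plain Android code. This is not the book's project code; KetaiNFC performs a setup of this kind for us, and the helper method name below is purely illustrative.

import android.app.Activity;
import android.app.PendingIntent;
import android.content.Intent;
import android.nfc.NfcAdapter;

// Illustrative helper (hypothetical name): register the running activity to receive
// NFC intents while it is in the foreground. FLAG_ACTIVITY_SINGLE_TOP routes a new
// tag discovery to the existing activity instead of starting a new one.
void enableForegroundNfc(Activity activity)
{
  PendingIntent pendingIntent = PendingIntent.getActivity(activity, 0,
    new Intent(activity, activity.getClass()).addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP), 0);
  NfcAdapter nfcAdapter = NfcAdapter.getDefaultAdapter(activity);
  if (nfcAdapter != null)
    nfcAdapter.enableForegroundDispatch(activity, pendingIntent, null, null);
}

In the sketch we never call anything like this directly; instantiating KetaiNFC in onCreate() and handing it each new intent in onNewIntent() is all the registration we need.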
Now let's move on to the NFC tab, where we put the NFC classes and methods.
void onNFCEvent(String s)
{
  // connect to the Bluetooth address we received, removing the prefix "bt:" first
  bt.connectDevice(s.replace("bt:", ""));
}
Here are the NFC steps we take.

- Receive the String from the NFC event using the onNFCEvent() callback method.
- Connect to the Bluetooth address we've received, removing the prefix bt: first.

Finally, let's take a look at the Bluetooth tab.
void send(int r, int g, int b, int x, int y, int w, int h)
{
  OscMessage m = new OscMessage("/remotePixel/");
  m.add(r);
  m.add(g);
  m.add(b);
  m.add(x);
  m.add(y);
  m.add(w);
  m.add(h);
  bt.broadcast(m.getBytes());
}

void receive(int r, int g, int b, int x, int y, int w, int h)
{
  fill(r, g, b);
  rect(x, y, w, h);
}

void onBluetoothDataEvent(String who, byte[] data)
{
  KetaiOSCMessage m = new KetaiOSCMessage(data);
  if (m.isValid())
  {
    if (m.checkAddrPattern("/remotePixel/"))
    {
      if (m.checkTypetag("iiiiiii"))
      {
        receive(m.get(0).intValue(), m.get(1).intValue(), m.get(2).intValue(),
          m.get(3).intValue(), m.get(4).intValue(), m.get(5).intValue(),
          m.get(6).intValue());
      }
    }
  }
}
Here are the steps we take to send and receive OSC messages over Bluetooth.

- Add the individual values to the OscMessage m.
- Send the byte data contained in the OSC message m via Bluetooth using broadcast().
- Receive the individual values sent via OSC, and draw a rectangle in the size and color determined by the received values.
- Check that all seven integers in the OSC message are present before using the values as parameters for the receive() method.

Now, with our recursive program, camera, NFC, and Bluetooth code completed, it's time to test the app.
Run the App
Before we run the app, we need to set two permissions. Open the Permission Selector from the Sketch menu and select CAMERA and INTERNET. Now browse to the sketch folder and open AndroidManifest.xml in your text editor, where you'll see that those permissions have been set. Add the NFC permission so the file looks something like this:
NFC/NFCBTTransmit/AndroidManifest.xml
<?xml version="1.0" encoding="UTF-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          android:versionCode="1" android:versionName="1.0" package="">
  <uses-sdk android:minSdkVersion="10"/>
  <application android:debuggable="true" android:icon="@drawable/icon"
               android:label="">
    <activity android:name="">
      <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <category android:name="android.intent.category.LAUNCHER"/>
      </intent-filter>
    </activity>
  </application>
  <uses-permission android:name="android.permission.BLUETOOTH"/>
  <uses-permission android:name="android.permission.BLUETOOTH_ADMIN"/>
  <uses-permission android:name="android.permission.CAMERA"/>
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.NFC"/>
</manifest>
Run the app on the device that is already connected to the PC via USB. When it launches, disconnect that device and run the app on your second Android device. Now it's time for the moment of truth: touch both devices back-to-back and confirm the P2P connection. You should see a colored rectangle on each display, taken from the camera preview of the other device. If you move your camera slightly, you'll recognize that its color is based on a live feed. Tap each screen to increase the resolution and observe what happens on the other device, then tap again. Each new division requires more performance from the devices, as the number of pixels we send and display increases exponentially. Keep tapping and you will observe how the app slows as the size of the data payload increases. Now that we've learned how to send a Bluetooth ID via NFC Beam technology to another device, let's move on to reading and writing NFC tags.