By Jeff LaMarche on February 8, 2011
Sorry for the lack of posts recently. Things have been, well… you know. Same old story. Super busy. Which is good, but it’s murder on blog post frequency.
I’ve recently had to port some OpenGL ES work I did from iOS to Android. It used to be that doing so would have been insanely painful (as opposed to just painful). I would have had to convert the Objective-C code to Java, and then maintain completely distinct sets of code that do the same exact thing. Fortunately, the Android NDK (Native Development Kit) allows you to write code for Android in C/C++. The version of the NDK supported on 2.2 still requires part of the Activity (Android’s counterpart to an iOS view controller) to be written in Java, but does allow you to call C/C++ code using JNI. In 2.3 and 3.0, you can do entire activities in C or C++.
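As a quick sketch of what that JNI bridge looks like (the package, class, and function names here are illustrative, not from any real project): the Java side of the Activity declares a native method and loads the shared library, and the C side implements it using JNI's name-mangling convention.

```c
/* Java side, inside the Activity:
 *     static { System.loadLibrary("mygame"); }
 *     private native void nativeDrawFrame();
 */

/* C side, built by the NDK. The exported function name encodes the
 * Java package (com.example.mygame), class (GameActivity), and method: */
#include <jni.h>

JNIEXPORT void JNICALL
Java_com_example_mygame_GameActivity_nativeDrawFrame(JNIEnv *env, jobject thiz)
{
    /* Shared C/C++ rendering code goes here -- the same code that
     * runs on iOS can be called from this point. */
}
```

This fragment needs the NDK's jni.h and a running JVM to actually do anything, but it shows the whole trick: everything below this entry point is plain C that both platforms can share.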
This is a huge step forward for Android for those of us who do performance-critical work on multiple platforms, but it’s not without some pain. Debugging across the JNI bridge is… less than easy. But being able to share code across platforms is a huge win, and being able to get native speeds in the process is teh awesome.
During these projects, I’ve been taking a lot of my 3D-related code and creating a new set of platform-agnostic C functions and types. I’ve been cleaning up the code, making names consistent, and adding the appropriate preprocessor macros to make sure everything compiles correctly everywhere. On iOS, the library will take advantage of the Accelerate framework in places, but doesn’t require Accelerate to function.
I’ve chosen C because I don’t like mixing C++ and Objective-C. The object models are too different for my tastes. But I’ve also made sure to include properly ifdef’d extern "C" declarations so that you can include the MC3D header files from C++ without hassle.
I’ve dubbed this set of functions MC3D, and I’m making it open source under a simplified version of the simplified BSD license (simplified simplified BSD license?). I’ve taken out the attribution requirement, so the only requirement is that if you re-distribute the source code, you have to leave the copyright and license text intact. That’s it. Otherwise, you can use it for free in any project, commercial or otherwise, without paying anything, without attributing, and without asking (no really, you don’t need to ask).
MC3D is still very much a work in progress, and I’m only adding code to the repository that I feel is ready for public consumption. Much of what’s in MC3D has been posted here before, sometimes with different names or in slightly different form.
I have other code that I plan to add in the future, including higher-level functionality like model loading, scene management, and skeletal animation, but I won’t add anything until it’s both solid and platform-agnostic.
Documentation is currently very sparse, and I can’t offer any support or help with using it, so caveat emptor! I will gladly accept contributions, bug fixes, and new functionality back into the MC3D codeline.
By Jeff LaMarche on October 29, 2010
It is not an exaggeration to say that the iPhone SDK and the App Store have forever changed the way that mobile applications are developed and sold. By building the iPhone SDK on the foundation laid by NeXT with NeXTSTEP, which later became Apple’s Cocoa framework for developing desktop applications, Apple was able to provide third-party developers of their new mobile platform with tools and some APIs that already had the benefit of over 20 years of use, testing, and documentation. Although iOS, of course, contains a great amount of new code designed specifically to handle the needs of a touch-based, mobile computing platform, many of the classes that implement fundamental behavior in the iOS SDK have been in regular use since the late 1980s; that code is extraordinarily robust and thoroughly documented.
But a mobile platform is different from a desktop or laptop computer in many ways, and not all of the technology that makes up the iPhone is as well-documented or as well-understood as the foundation classes inherited from NeXTSTEP. One such technology is OpenGL ES, a graphics library designed for use on smaller devices with limited processing power and memory (the ES stands for embedded systems). Although the iPhone, iPod touch, and iPad are, in many ways, engineering marvels, they are still considerably underpowered compared to today’s laptop and desktop computers. They have less RAM, slower processors with fewer processing cores, and a less powerful GPU than even inexpensive general-purpose computers. iOS applications, such as games, that want to fully leverage the graphics capabilities of the iPhone generally have to use OpenGL ES to get the best possible performance out of the hardware.
Yet if you go looking for specific beginner-level information about how to use OpenGL ES on the iPhone, it can be hard to find. Although there are a great many books, tutorials, and articles on OpenGL, of which OpenGL ES is a subset, nearly every one starts out teaching something called direct mode (more commonly known as immediate mode), which doesn’t exist in OpenGL ES (or in the most recent OpenGL specification, for that matter). Direct mode was one of the earliest ways to interact with OpenGL, but it’s not used much in practice because it’s slow. In direct mode, you make a separate C function call for every single piece of data or instruction you need to pass to OpenGL. To draw a triangle, for example, you have to make four function calls (in addition to any setup code): one call to define the location of each of the three points that make up the triangle, then another function call to actually draw the triangle. For complex objects, direct mode code quickly becomes tedious and inefficient.
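For the record, those four calls look like this in desktop OpenGL. This fragment won't compile against the iOS SDK, because this is exactly the API that OpenGL ES left out; it also assumes a rendering context is already set up.

```c
glBegin(GL_TRIANGLES);           /* setup: tell OpenGL what we're drawing */
glVertex3f( 0.0f,  1.0f, 0.0f);  /* one call per vertex...               */
glVertex3f(-1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f, -1.0f, 0.0f);
glEnd();                         /* ...and one call to actually draw     */
```

Multiply that by thousands of triangles per frame and it's easy to see why this style was abandoned for performance-sensitive code.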
Direct mode was kept in workstation OpenGL for many years after it had stopped being a viable option, not just for backward compatibility but also because it was a tremendous learning tool. By having to break the drawing process down into all these individual function calls, a programmer who is new to graphics programming and the mathematics of drawing can more easily conceptualize what is going on. After spending a while with direct mode, new developers begin to understand how OpenGL works; by the time they are introduced to more efficient ways of submitting data to OpenGL, they have a good grounding and are ready for the conceptually harder techniques.
Without direct mode, programmers new to OpenGL ES are forced to begin using these harder techniques immediately. There’s no gradual entry into the OpenGL ES pool: you have to jump right into the deep end. And if you don’t already know how to swim, jumping into the deep end can be pretty intimidating.
To make matters worse, to fully leverage the power of today’s iOS devices, you have to use OpenGL ES 2.0, which has an even steeper learning curve than earlier versions. OpenGL ES 2.0 dropped support for something called fixed-pipeline rendering, which provided a number of stock function calls for handling common tasks such as setting up lights, moving and rotating objects, and defining the part of the world to be rendered. Under the fixed pipeline, for example, if you wanted to rotate an object, you would simply call the built-in function glRotatef() before drawing, which would tell OpenGL ES how far and on what axis to rotate the object before drawing it. With OpenGL ES’s focus on performance, once the programmable pipeline was introduced in OpenGL ES 2.0, support for the fixed pipeline was dropped completely, meaning all those convenient functions for setting up and moving objects around your scene are gone. OpenGL ES 1.1 applications will not even compile under OpenGL ES 2.0. Not only is OpenGL ES 2.0 the deep end of the pool, it’s the deep end of a very deep, very wide, and very cold pool.
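A rough illustration of the difference, assuming a working rendering context in both cases. In the 2.0 version, `mat4LoadZRotation` is a made-up name standing in for whatever matrix code you write or borrow, and `matrixUniform` is a uniform location you'd have looked up in your own vertex shader:

```c
/* OpenGL ES 1.1, fixed pipeline: one built-in call. */
glRotatef(45.0f, 0.0f, 0.0f, 1.0f);   /* rotate 45 degrees around the z axis */

/* OpenGL ES 2.0, programmable pipeline: no glRotatef(). You build the
 * rotation matrix yourself and hand it to your own vertex shader:       */
GLfloat matrix[16];
mat4LoadZRotation(matrix, radians);                 /* your own math code */
glUniformMatrix4fv(matrixUniform, 1, GL_FALSE, matrix);
```

In other words, 2.0 makes you responsible for the math the fixed pipeline used to do on your behalf, which is precisely why libraries of shared matrix code are worth building.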