Problems with CDB Debugger in QtCreator

Some time ago, the Locals and Expressions view in QtCreator just stopped working for me. No locals were listed when the program stopped on a breakpoint, watches did not show their values, and hovering over a variable in the editor did not show its value in a tooltip. No fiddling with IDE options helped. This was on Windows, using the Visual C++ compiler and CDB as the debugger.

It was quite annoying, but I was mostly working on UI at that time and could live with occasionally dumping a few variables into a log. A few weeks later I moved on to some math-heavy stuff and the situation became desperate: I had to fix this!

At first, I suspected the debugger or the compiler. Since reinstalling and trying different versions did not help, I also tried different versions of Qt and QtCreator itself. Still broken. And Googling for broken CDB only turned up complaints about CDB getting unusably slow.

After reinstalling QtCreator, I noticed the settings were preserved from the old version. Maybe they were somehow corrupted? I found them in C:\Users\username\AppData\Roaming\QtProject\qtcreator\, made a backup of that folder, and deleted it. And it started working! Locals and Expressions were back!

Of course, I also wanted my old settings back. So I restored the old settings and started QtCreator again, and to my surprise it still worked! I took a closer look and saw that the problem lies in the session file (*.qws): when I deleted the settings folder, QtCreator created a new session for me, and that is the session I reopened in the IDE after switching back to the old settings (I had just copied the files into the folder, so the new session file stayed there).

So I opened the session file, intending to delete suspicious stuff until it worked. It's an XML file, and apart from one big base64 blob there is not much else in it. Besides a breakpoint list, the only thing that seemed related to debugging was a list of maybe ten old watch expressions (which were no longer listed in the IDE!). I deleted it and voilà, I could see the values of locals and watches again!

After a few weeks, debugging started to get slow again. Stepping over code took a few seconds and values of variables showed up only after considerable time (tens of seconds). This time I knew what to do: deleting the watch expressions from the session file helped again. In hindsight, I concluded that in the first case the values would probably have shown up eventually, after a few minutes (stepping over code was not slowed down then, though); it was just very, very slow. The Googled complaints about slow CDB suddenly made more sense. And indeed, I found that someone had fixed it the same way, but I had not paid attention to merely slow debugging before, as I was looking for broken debugging!

Recapitulation

  1. Debugging in QtCreator using CDB on Windows is very slow and/or values of locals and expressions never show up.
  2. Go to the folder where QtCreator stores your session file (C:\Users\username\AppData\Roaming\QtProject\qtcreator\).
  3. Open the session file (*.qws) in a text editor and look for this XML subtree:
    <data>
        <variable>value-Watchers</variable>
        <valuelist type="QVariantList">
            <value type="QString">some expression 1</value>
            <value type="QString">some expression 2</value>
            <value type="QString">some expression 3</value>
        </valuelist>
    </data>
    
  4. Delete this subtree (or just the expressions) and save the file.
  5. Start QtCreator, open the fixed session, and do some debugging.

.NET and Java: Generating Interoperable AES Key and IV

Let’s assume we want to generate an encryption key and initialization vector (IV) for AES encryption based on some passphrase. And we want to be able to generate the same key and IV for the same passphrase in both .NET and Java – maybe we have an Android app written in Java that needs to decrypt messages from an ASP.NET web app.

In .NET, the Rfc2898DeriveBytes class is often used to derive keys of a specified length from a given passphrase, salt, and iteration count (RFC 2898 / PBKDF2). For a 256-bit key and a 128-bit IV it is as simple as this:

var keyGen = new Rfc2898DeriveBytes(passwordBytes, 
    saltBytes, iterationCount);

byte[] key = keyGen.GetBytes(256 / 8);
byte[] iv = keyGen.GetBytes(128 / 8);

Fortunately, a PBKDF2 implementation is also built into Java:

SecretKeyFactory factory = 
    SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
KeySpec spec = new PBEKeySpec(passwordChars, 
    saltBytes, iterationCount, 256);
SecretKey secretKey = factory.generateSecret(spec);
byte[] key = secretKey.getEncoded();

We get the same key byte array, albeit with some more typing. And how about the initialization vector? One could think that creating a new PBEKeySpec with a length of 128 is the way to go. I know I did.

However, you would just get the same bytes as for the key (the first half of them). This key derivation algorithm is deterministic, so for the same inputs you get the same output. Each call to GetBytes of .NET’s Rfc2898DeriveBytes simply returns more and more bytes generated by the algorithm, whereas the Java implementation needs to know the total output length up front. So for a 256-bit key and a 128-bit IV we need to create a PBEKeySpec with a length of 384 and split the result between the key and the IV:

KeySpec spec = new PBEKeySpec(passwordChars, 
    saltBytes, iterationCount, 256 + 128);
SecretKey secretKey = factory.generateSecret(spec);

byte[] data = secretKey.getEncoded();
byte[] keyBytes = new byte[256 / 8];
byte[] ivBytes = new byte[128 / 8];

System.arraycopy(data, 0, keyBytes, 0, 256 / 8);
System.arraycopy(data, 256 / 8, ivBytes, 0, 128 / 8);
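For reference, the two snippets above can be combined into a single round trip on the Java side. This is only a sketch to show the derived key and IV in use with AES in CBC mode; the class and method names, the example inputs, and the PKCS5 padding choice are mine, not something from the original code:

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

class AesRoundTrip {
    // Derives the key + IV from the passphrase as described above, then runs
    // an AES/CBC encrypt-decrypt round trip and returns the decrypted text.
    public static String roundTrip(String message, char[] passwordChars,
                                   byte[] saltBytes, int iterationCount) throws Exception {
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        // One derivation of 384 bits: 256 for the key, 128 for the IV
        byte[] data = factory.generateSecret(
            new PBEKeySpec(passwordChars, saltBytes, iterationCount, 256 + 128)).getEncoded();
        byte[] keyBytes = Arrays.copyOfRange(data, 0, 256 / 8);               // first 32 bytes
        byte[] ivBytes = Arrays.copyOfRange(data, 256 / 8, (256 + 128) / 8);  // next 16 bytes

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(keyBytes, "AES"),
                    new IvParameterSpec(ivBytes));
        byte[] encrypted = cipher.doFinal(message.getBytes(StandardCharsets.UTF_8));

        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(keyBytes, "AES"),
                    new IvParameterSpec(ivBytes));
        return new String(cipher.doFinal(encrypted), StandardCharsets.UTF_8);
    }
}
```

The same key and IV bytes derived on the .NET side should decrypt what this encrypts, as long as both sides agree on the cipher mode and padding.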

Note: All the Java stuff was tested only on Android.

Multilevel Geomipmapping Program + Sources Released

There have been a few requests for the source code of the Multilevel Geomipmapping terrain renderer, so I’m finally releasing it. The code has not been touched since 2008, but it compiles fine in the current version of Lazarus. I tested it on Windows only, but back in 2008 it also ran on Linux and FreeBSD. Unfortunately, not all of the test terrain data could be included because of its massive size.

You can find more info in the included Readme and in the previously linked article. Note: the release archive is in 7z format for a smaller download size.

  Multilevel Geomipmapping
» 92.1 MiB - 3,710 hits - May 5, 2014 (last update May 5, 2014)
Terrain renderer using OpenGL. Includes Object Pascal source code, binaries, and test data.

  • Small terrain 2k x 2k
  • Multilevel tree nodes and wireframe display

Deskew Tool Version 1.10

A new version of the Deskew command line tool is ready. You can find general info about Deskew on the Deskew Tools page.

Change List for Deskew 1.10

  • TIFF support now also on Win64 and 32/64-bit Linux platforms
  • Forced output formats
  • Fix: output file names were always lowercase
  • Fix: resolution metadata (e.g. 300 DPI) of the input is now preserved when writing output


Android Terrain Rendering: Vertex Texture Fetch, Part 1

To my surprise, I found out that the GPU (PowerVR SGX 540) in my venerable Nexus S (2010) supports vertex texture fetch (VTF), that is, accessing texture pixels in the vertex shader: a very useful feature for terrain rendering. About a year ago, when I started investigating terrain rendering on Android devices, I did some searching for VTF support and figured out it was not there yet (similar to the situation years ago when desktop OpenGL 2.0 was released with support for texture sampling in GLSL vertex shaders, but most GL implementations just reported GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS to be zero). Now I don’t know how I missed it on my own phone; maybe there was some Android update with updated GPU drivers during the last year? I have no idea how many other devices support it now. Hopefully, the newest ones with OpenGL ES 3 all do. I wouldn’t be surprised if, among GLES 2 devices, only PowerVR + Android 4+ combinations supported it.

Overview

Anyway, let’s focus on terrain rendering. Here’s a rough outline:

  1. Put the entire heightmap into a texture.
  2. Have a small 2D grid (say 16×16 or 32×32) geometry ready for rendering terrain tiles.
  3. Build a quad tree over the terrain. The root node covers the entire terrain and each child then covers a quarter of its parent’s area.
  4. Now we can start rendering; do this every frame:
    1. Traverse the quadtree starting from the root.
    2. For each child node test if the geometry grid provides sufficient detail for rendering the area covered by this node:
      • YES it does, mark this node for rendering and end traversal of this subtree.
      • NO it does not, continue traversal and test children of this node (unless we’re at leaf already).
    3. Now take the list of marked nodes and render them. The same 2D grid is used to render each tile: it is scaled according to the tile’s covered area and its vertices are displaced by height values read from the texture.

Root covers entire terrain, each child quarter of the parent’s area.
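The traversal in step 4 can be sketched roughly as follows. This is a simplified, distance-based version in plain Java; the class, field, and method names are illustrative, not taken from the demo’s actual sources:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative quadtree node over a square terrain area.
class TerrainNode {
    final double centerX, centerZ, size;  // square area this tile covers
    TerrainNode[] children;               // null for leaf nodes

    TerrainNode(double cx, double cz, double size, int levels) {
        this.centerX = cx;
        this.centerZ = cz;
        this.size = size;
        if (levels > 0) {
            double q = size / 4, half = size / 2;
            children = new TerrainNode[] {
                new TerrainNode(cx - q, cz - q, half, levels - 1),
                new TerrainNode(cx + q, cz - q, half, levels - 1),
                new TerrainNode(cx - q, cz + q, half, levels - 1),
                new TerrainNode(cx + q, cz + q, half, levels - 1),
            };
        }
    }

    // Simple distance-based metric: the fixed grid gives enough detail
    // once the viewer is far away relative to the tile's size.
    boolean detailSufficient(double eyeX, double eyeZ, double tolerance) {
        double dx = centerX - eyeX, dz = centerZ - eyeZ;
        return Math.sqrt(dx * dx + dz * dz) > size * tolerance;
    }

    // Mark this node for rendering, or descend into its children.
    void selectForRendering(double eyeX, double eyeZ, double tolerance,
                            List<TerrainNode> marked) {
        if (children == null || detailSufficient(eyeX, eyeZ, tolerance)) {
            marked.add(this);  // grid detail is sufficient (or we hit a leaf)
        } else {
            for (TerrainNode child : children) {
                child.selectForRendering(eyeX, eyeZ, tolerance, marked);
            }
        }
    }
}
```

Each marked node would then be drawn with the shared 16×16 or 32×32 grid, scaled to the node’s area, with heights fetched from the heightmap texture in the vertex shader.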

This is basically what I originally wanted for Multilevel Geomipmapping years ago but couldn’t do in the end because of the state of VTF support on desktop at that time.

So what exactly is the benefit of VTF over, let’s say, geomipmapping here?

The main benefit is the ability to get the height of the terrain at any position (and multiple times) when processing each tile vertex. In traditional geomipmapping, even if you can move tile vertices around, it’s no use since you have only one fixed height value available. With VTF, you can modify the position of a vertex as you like and still get the correct height value. This greatly simplifies tasks like connecting neighboring tiles with different levels of detail: no ugly skirts or special stitching strips of geometry are needed, as you can simply move edge points around in the vertex shader. Geomorphing solutions also become usable without much work. And you can display larger terrains as well. With geomipmapping you always have to draw a fixed number of tiles (visible leaves), a number that goes up fast when you enlarge the terrain. VTF may allow you to draw a fixed number of tiles regardless of actual terrain size (as distant tiles cover a much larger area compared to geomipmap tiles with a fixed area). Another one: terrain normals can be calculated inside the shaders from neighboring height values.
Finally, since the heightmap is now a regular texture, you get filtering and access to compressed texture formats to stuff more of the terrain into memory.

There must be some disadvantages, right?

Sure: support for VTF on mobile GLES 2 GPUs is scarce, so for anything other than a tech demo it’s useless for the time being. Hopefully, all GLES 3 GPUs will support VTF, and with usable performance – VTF was uselessly slow on desktops in the beginning.

Implementation

I have added an experimental VTF-based terrain renderer to my Terrain Rendering Demo for Android testbed and it looks promising. Stitching of the tiles works flawlessly. More work is needed on selecting nodes for rendering (render a node or split it into children?). Currently, there’s only a simple distance-based metric, but I want to devise something that takes the classical “screen-space error” into account. And maybe some fiddling with geomorphing on top…

Follow some of the implementation details in part 2 (soon!).

VTF Terrain Shot

Terrain Rendering Demo for Android

I finally got around to releasing the Android terrain rendering demo I’ve been working on for the last few months (a few moments here and there). I did the bulk of the work in November 2012, partly described in the posts Porting glSOAR to Android and OpenGL ES, Part 1 and Porting glSOAR to Android and OpenGL ES, Part 2 – the third part is still just a draft :(

Anyway, here’s the current version, which supports the Geomipmapping and SOAR terrain rendering methods. Gritty details about the internals will follow in some future post. There is also a nearly identical desktop version for reference, an advantage of using libGDX for this.

Terrain in action

Downloads and Installs

Google Play Store for Android Version

  glTerrainJava for Desktop v0.30
» 15.3 MiB - 798 hits - June 18, 2013 (last update July 4, 2013)
Desktop version of Java terrain rendering demo.

Controls

When the demo starts you get to the main menu screen. Here you can select the terrain LOD method and some parameters. An important one is “tolerance in pixels”, which controls when a part of the terrain switches to a coarser representation. Basically, lower tolerance = better quality = lower performance.

On Android, just check “autowalk” in the menu and later swipe a finger on the display to look around and change direction. Better/more controls are on the to-do list. On the desktop, you can also use these keys when viewing the terrain: W/Up – forwards, S/Down – backwards, Ctrl – move really fast, +/- – change tolerance, O – toggle wireframe overlay.

More Screens

Settings menu

Wireframe overlay

Future

  • Benchmark mode – terrain flyover
  • Some instructions inside
  • Controls for walking over the terrain on keyboard-less devices
  • Geomipmap tiles without skirts
  • LOD method using vertex texture fetch, will it actually run on any phone?
  • Multithreading for mesh refinement

Limitations

It needs at least a 2048×2048 max texture size (4096 on the desktop, as a more detailed texture is used), GL_OES_element_index_uint for SOAR, and GL_OES_standard_derivatives for the wireframe overlay. For instance, SOAR won’t run on a Galaxy S3 with a Mali-400 MP GPU. Also, the wireframe overlay is only available for Geomipmapping (it uses barycentric coordinates for the wireframe).

.NET and WPF Notes #1

I have been working in .NET and WPF lately. Of course, I ran into some issues and had to look up some solutions. I wrote some of it down, for “future reference” and for anyone who might be interested.

Design Time Data and Properties

When designing a WPF data template for ListBox items, I wanted a design-time preview with sample mock data for the items. Also, some simple way to override certain property values would be nice (e.g. when you bind brushes of a path to run-time values but want to use fixed ones for design). For the mock data you can use the d:DataContext design-time attribute. A class with the sample data is created in code-behind and then bound using this attribute.

public class MockMeasurementList
{
  private MeasurementList measurements = new MeasurementList();
  public MeasurementList Measurements { get { return measurements; } }

  public MockMeasurementList()
  {
    measurements.Add(new Measurement(
      new Position(49.3051356, 16.5607972, 358),
      new Vector(52.035075, 7.854967, 1492.690400)));
  }
}
<UserControl.Resources>
    <mocks:MockMeasurementList x:Key="DesignList"/>
</UserControl.Resources>
<ListBox Name="ListMeasurements"
   d:DataContext="{Binding Source={StaticResource DesignList}}"
   ItemsSource="{Binding Measurements}">

WPF design-time values for simple properties like brushes and sizes can be set, for example, using this approach by Marcin Najder. I use it for the stroke and fill brushes of map markers:

<UserControl ... xmlns:dtools="clr-namespace:DesignTools">
  ...
  <Path Stroke="{Binding Stroke}" Fill="{Binding Fill}"
        dtools:d.Stroke="Navy" dtools:d.Fill="PowderBlue">
  ...

Events and Delegates Returning Bool

Suppose we have an event like this:

public event Func<bool> StoreMeasurementQuery;

and we want to do some action (e.g. store a measurement) only if all handlers subscribed to the event return true (Is the measurement valid? Is there enough storage?). Now if we raise the event the usual way (assuming StoreMeasurementQuery != null):

bool store = StoreMeasurementQuery();

the result won’t be what one could expect, as store will hold the return value of only the last handler executed. To get sensible results we have to invoke the handlers individually and check their return values. Based on this SO answer I wrote these two extension methods:

public static bool AllSubscribersReturnTrue(this Func<bool> evt)
{
  // ToList() forces every handler to run, even after one returns false
  return evt.GetInvocationList().Cast<Func<bool>>().
    Select(func => func()).ToList().All(ret => ret);
}

public static bool AnySubscriberReturnsTrue(this Func<bool> evt)
{
  return evt.GetInvocationList().Cast<Func<bool>>().
    Select(func => func()).ToList().Any(ret => ret);
}

to get the logical AND and OR of all the event handlers’ return values.

Grid Mouse Interaction

By default, Grid and other Panel controls don’t receive mouse events. If you want to use a Grid, for example, as a clickable container for list box items, you need to set some background on it. Background="Transparent" is good enough.

Google Apps Script Notes

Extend, Inherit

I did some work in Google Apps Script for a friend recently. After a while, I had a number of disorganized helper functions that were basically extensions of various objects from the GAS Default Services (mostly batching several methods together, etc.). Later, I started wondering if there is a way to extend GAS objects directly. Unfortunately, there is currently no way to do that, and according to this issue with a “won’t fix” resolution, there never will be.

So I decided to put the related functions into separate classes. And I wanted some sort of inheritance so that I could have something like this:

// Create GAS object
var table = uiApp.createFlexTable();
// Some basic table writer for generic usage
var writer = new TableWriter(table);
writer.writeRow(["a", "b", "c"], style);
// Specialized table generator, subclass of generic TableWriter
var builder = new ReportBuilder(table);
builder.addOrder(order);


Porting glSOAR to Android and OpenGL ES, Part 2

Porting glSOAR to Android and OpenGL ES, Part 1 was more about the libraries used and Java. Now part 2 tells the story of the transition from OpenGL to OpenGL ES for the glSOAR terrain renderer.

glSOAR OpenGL ES Gotchas

Initially, I wanted to just use the fixed pipeline because that’s what desktop glSOAR uses (remember, the original SOAR is from 2001). That meant using GLES 1.0/1.1, since GLES 2.0 removed all the fixed pipeline stuff (matrix settings, lighting, immediate mode, and so on). To quote the official GLES docs:

Note: Be careful not to mix OpenGL ES 1.x API calls with OpenGL ES 2.0 methods! The two APIs are not interchangeable and trying to use them together only results in frustration and sadness.

I have to admit I didn’t really check what features GLES actually has, somehow assuming it would be on par with regular OpenGL (1.x core or some 2.0). Here’s a list of a few problems I encountered during the conversion:

  1. There’s no automatic texture coordinate generation. Desktop glSOAR uses OpenGL to generate texture coordinates for the terrain mesh (by means of glTexGen) to save memory. Fortunately, a simple workaround is possible by setting the texture transformation matrix directly (details at fernlightning). Of course, when using GLES 2 you can just generate the coordinates in the shader.
  2. No wireframe display in GLES! There’s no glPolygonMode, so you only get filled triangles. Desktop glSOAR can display a wireframe overlay over the textured terrain to show off cLOD in action by drawing the terrain in an additional pass with the polygon mode set to GL_LINE and using glPolygonOffset. In GLES, I could try rendering the terrain as GL_LINES instead of GL_TRIANGLES. That kind of works so far for a simple terrain grid (getting wire quads instead of triangles, though), but it will probably break when cLOD is implemented.
  3. Then I hit the show stopper, at least for GLES 1.0/1.1: there’s no support for 32-bit indices (the GL_UNSIGNED_INT enum for glDrawElements) in GLES core. And 16-bit indices are only good for terrains of size 129×129 and smaller (as a SOAR terrain simply cannot be split into smaller chunks). Fortunately, there is a GLES extension that allows the use of 32-bit indices, called GL_OES_element_index_uint. I had seen on the GLBenchmark page that my phone and many others (at least those with Adreno and PowerVR GPUs) support it, but my test program insisted otherwise. As it turned out, it’s only supported with a GLES 2 context. So it was goodbye to GLES 1 and the fixed function pipeline…
  4. The move to GLES 2 was actually quite easy, since glSOAR just needs to output textured triangles with nothing fancier. GLES GLSL shaders are a little different from regular OpenGL shaders. For instance, there are no predefined variables like gl_Vertex, gl_TexCoord, gl_ModelViewProjectionMatrix, etc., and there is only gl_Position and gl_FragColor for setting the results. Vertex positions, texture coordinates, and so on are passed to the shader as attributes, and transformation matrices as uniforms. Fortunately, GLES-style shaders work in desktop OpenGL without problems.
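For illustration, a minimal GLES 2 style shader pair along those lines might look like this, kept as Java string constants the way libGDX’s ShaderProgram takes them. The a_/u_/v_ names are common conventions, not glSOAR’s actual identifiers:

```java
// Minimal GLES 2 shaders as Java strings; names are illustrative conventions.
class TerrainShaders {
    public static final String VERTEX =
          "attribute vec3 a_position;\n"   // replaces gl_Vertex
        + "attribute vec2 a_texCoord;\n"   // replaces gl_TexCoord
        + "uniform mat4 u_mvpMatrix;\n"    // replaces gl_ModelViewProjectionMatrix
        + "varying vec2 v_texCoord;\n"
        + "void main() {\n"
        + "  v_texCoord = a_texCoord;\n"
        + "  gl_Position = u_mvpMatrix * vec4(a_position, 1.0);\n"
        + "}\n";

    public static final String FRAGMENT =
          "precision mediump float;\n"     // GLES requires a default precision
        + "uniform sampler2D u_texture;\n"
        + "varying vec2 v_texCoord;\n"
        + "void main() {\n"
        + "  gl_FragColor = texture2D(u_texture, v_texCoord);\n"
        + "}\n";
}
```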

Some Additional GLES Findings

A good listing of supported GLES extensions for different phone and tablet models can be found in the GLBenchmark Results (select a model and then look at the GL config tab).

Texture Compression

Some form of texture compression is supported by nearly all (if not all) current mobile GPUs. On the desktop, it’s easy now and has been for many years. We have S3TC/DXTC (supported by GPUs for ages), its variant ATI 3Dc (uses the alpha channel coding scheme from DXT5; supported by all DirectX 10 GPUs and older ones too), and the recent addition of the BC6/BC7 formats in DirectX 11 class GPUs.

Unfortunately, it is not so easy in the GLES and mobile GPU world. The problem is that each vendor can support completely different compressed formats. The only certainty is that a GLES 2.0 capable GPU supports ETC1 (Ericsson Texture Compression), which has no alpha channel. As far as Android is concerned, ~90% of devices have a GLES 2 GPU (as of Oct 2012). Additionally, S3TC is supported by Nvidia Tegra, PVRTC by PowerVR, and ATI-TC/ATC by Adreno.

The new ETC2 compression looks interesting though. It is part of the core of the new OpenGL 4.3 as well as GLES 3.0. On the desktop, it should be available on all DirectX 11 class GPUs (once the drivers arrive). The quality is supposedly better than S3TC and it has none of S3TC’s patent issues.

Anyway, for the new glSOAR it looks like ETC1 for the Android target and S3TC for the desktop, most probably in KTX (Khronos Texture) files. So that means writing a KTX loader in Java and probably some ETC1 and KTX stuff for Vampyre Imaging Library too.

Some tools: the etcpack tool from Ericsson handles ETC1/ETC2 compression (and outputs KTX files), etc1tool for ETC1 is part of the Android SDK, and ATI Compressonator can compress ETC1, S3TC/DXTC, 3Dc, and ATI-TC.

NPOT Textures

Non-power-of-two (NPOT) textures have been supported by desktop GPUs for quite some time (at least all DirectX 10 capable GPUs have full support – not sure how “full” it is, for example, on Intel iGPUs). GLES 2 has only limited support for NPOT textures (no mipmaps, limited texture wrapping modes, etc.); with the GL_OES_texture_npot extension you get full NPOT support in GLES.

Current Status and Near Future

Right now I have just grid terrain rendering with no LOD. Some parts of the SOAR code are already translated to Java. I’m quite worried about the SOAR LOD performance on Android, though. Firstly, the index buffer is rebuilt each frame (and can be quite big) and secondly, mesh refinement uses a lot of floating point calculations. Also, the memory limit for Android apps is quite low for larger terrains (SOAR needs 20 bytes per vertex plus a lot of 4-byte indices).

Well, if it is unusably slow, I can always abandon SOAR and do a Geomipmapping demo instead (that’s the plan for later anyway :)).

glSOAR Android Test running on Nexus S

Porting glSOAR to Android and OpenGL ES, Part 1

A few weeks ago I had an urge to try game and/or graphics development on Android. I’ve been doing some Android application development lately, so I had all the needed tools set up already. However, what I wanted was a cross-platform engine or framework so that I could develop mostly on the desktop. Working with emulators and physical devices is tolerable when doing regular app development, and of course you want to use the UI framework native to the platform. For games or other hardware-stressing apps I really wouldn’t like to work like that. The emulator is useless, as it doesn’t support OpenGL ES 2.0 and is generally very slow. And it’s no pleasure with a physical device either: grabbing it off the table all the time, apk upload and installation take time, etc. So I started looking for solutions.

The Search

Initially, I considered using something like Oxygene. It uses Object Pascal syntax and can compile to .NET bytecode and also Java bytecode (and use .NET and Java libs). However, I would still need some framework for graphics, input, etc., so I would be locked into the .NET or Java world anyway. I’m somewhat inclined to pick up Oxygene in the future for something (there’s also a native OS X and iOS target in the works), but not right now. Also, it’s certainly not cheap for just a few test programs. Then there is Xamarin – Mono for Android and iOS (MonoGame should work here?) – but it’s even more expensive. When you look at Android game engines and frameworks specifically, you end up with things like AndEngine, Cocos2D-X, and libGDX.

After a bit of research, I settled on libGDX. It is a Java framework with some native parts (for tasks where JVM performance may be inadequate), currently targeting desktop (Win/OSX/Linux), Android, and HTML5 (JavaScript + WebGL, in fact). Graphics are based on OpenGL (desktop) and OpenGL ES (Android and web). The great thing is that you can do most of the development in the desktop Java version, with faster debugging and code hot swap. One can also use OpenGL commands directly, which is a must (at least for me).

The Test

I wanted to do some test program first, and I decided to convert the glSOAR terrain renderer to Java and OpenGL ES (GLES). The original is written in Object Pascal using desktop OpenGL. The plan was to convert it with as few modifications as possible. At least for 3D graphics, libGDX is a relatively thin layer on top of GLES. You have some support classes like texture, camera, and vertex buffer, but you still need to know what a projection matrix is, bind resources before rendering, etc. Following is a listing of a few ideas, tasks, and problems encountered during the conversion (focusing on Java and libGDX).

Project Setup

Using Eclipse, I created the project structure for libGDX-based projects. It is quite neat, actually: you have a main Java project with all the shared platform-independent code and then one project for each platform (desktop, Android, …). Each platform project has just a few lines of starter code that instantiates your shared main project for the respective platform (of course, you can do more here).

There are two problems here, though. Firstly, changes in the main project won’t trigger a rebuild of the Android project, so you have to trigger it manually (e.g. by adding/deleting an empty line in the Android project code) before running on Android. This is actually a bug in ADT Eclipse plugin version 20, so hopefully it will be fixed in v21 (you can star this issue).

The second issue is asset management, but that is easy to fix. I want to use the same textures and other assets in the version for each platform, so they should be located in some directory shared between all projects (like inside the main project, or a few directory levels up). The thing is that for Android, all assets are required to be located in the assets subdirectory of the Android project. The recommended solution for libGDX here is to store the assets in the Android project and create links (link folder or link source) in Eclipse for the desktop project pointing to the Android assets. I didn’t really like that. I want my stuff to be where I want it to be, so I created some file system links instead. I used the mklink command in Windows to create junctions (as Total Commander wouldn’t follow symlinks properly):

d:\Projects\Tests\glSoarJava> mklink /J glSoarAndroid\assets\data Data
Junction created for glSoarAndroid\assets\data <<===>> Data
d:\Projects\Tests\glSoarJava> mklink /J glSoarDesktop\data Data
Junction created for glSoarDesktop\data <<===>> Data

Now I have a shared Data folder at the same level as the project folders. In the future, though, I guess some platform-specific assets will be needed (like different texture compressions, sigh).

Java

Too bad Java does not have any user-defined value types (like struct in C#). I split the TVertex type into three float arrays: one for the 3D position (at least value-type vectors would be nice – it’s vertices[idx * 3 + 1] instead of vertices[idx].y now, which is more readable?) and the rest for the SOAR LOD related parameters. Making TVertex a class and spawning millions of instances didn’t seem like a good idea even before I began, and it would be impossible to pass the positions to OpenGL anyway.
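A minimal sketch of that structure-of-arrays split might look like this. The field names and which LOD parameters get their own array are my guesses for illustration, not glSOAR’s actual layout:

```java
// Structure-of-arrays vertex storage, since Java has no value types.
// Field names and the choice of LOD parameters are illustrative only.
class TerrainVertices {
    final float[] positions;  // x, y, z packed: 3 floats per vertex
    final float[] errors;     // per-vertex object-space error (SOAR LOD)
    final float[] radii;      // per-vertex bounding sphere radius (SOAR LOD)

    TerrainVertices(int count) {
        positions = new float[count * 3];
        errors = new float[count];
        radii = new float[count];
    }

    // The awkward indexing the post mentions: vertices[idx].y becomes this.
    float getY(int idx) {
        return positions[idx * 3 + 1];
    }
}
```

The positions array can then be handed to OpenGL in one piece, which a class-per-vertex design could not do.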

Things like data for VBOs, vertex arrays, etc. are passed to OpenGL in Java using descendants of the (direct) Buffer class. The same goes for generating IDs with glGen[Textures|Buffers|…] and basically everything that takes pointer-to-memory parameters. Of course, it makes sense in an environment where you cannot just touch memory as you want; still, it is kind of annoying for someone not used to it. At least libGDX comes with some buffer utils, including fast copying through native JNI code.
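For illustration, wrapping a float array in a direct buffer for something like glBufferData goes roughly like this (plain java.nio, no libGDX utils; the helper name is mine):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

class DirectBuffers {
    // Copies a float array into a direct FloatBuffer, the form the Android
    // GL bindings expect for pointer-to-memory parameters.
    static FloatBuffer toDirectBuffer(float[] data) {
        FloatBuffer buffer = ByteBuffer
            .allocateDirect(data.length * 4)   // 4 bytes per float
            .order(ByteOrder.nativeOrder())    // GL wants native byte order
            .asFloatBuffer();
        buffer.put(data).flip();               // rewind so GL reads from index 0
        return buffer;
    }
}
```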

Performance

The roots of the SOAR terrain rendering method are quite old today and come from the times when it was okay to spend CPU time to limit the number of triangles sent to the GPU as much as possible. That has been the complete opposite of the situation on PCs for a long time (unless you had an old integrated GPU, that is). I guess that is now true for the hardware in mobile devices as well (at least the new ones). And there will also be some Java/Dalvik overhead…

Anyway, this is just an exercise, so even if the end result turns out to be useless, the experience gained during development is what counts.

OpenGL >> GLES

Continue reading in Porting glSOAR to Android and OpenGL ES, Part 2, which focuses on the OpenGL to OpenGL ES transition.