Porting glSOAR to Android and OpenGL ES, Part 2

Part 1 was more about the libraries used and Java. Now part 2 tells the story of the transition from OpenGL to OpenGL ES for the glSOAR terrain renderer.

glSOAR OpenGL ES Gotchas

Initially, I wanted to just use the fixed pipeline because that's what desktop glSOAR uses (remember, the original SOAR is from 2001). That meant using GLES 1.0/1.1, since GLES 2.0 removed all the fixed pipeline stuff (matrix settings, lighting, immediate mode, and so on). To quote the official GLES docs:

Note: Be careful not to mix OpenGL ES 1.x API calls with OpenGL ES 2.0 methods! The two APIs are not interchangeable and trying to use them together only results in frustration and sadness.

I have to admit I didn't really check what features GLES actually has, somehow assuming it would be on par with regular OpenGL (1.x core or some 2.0). Here's a list of a few problems I encountered during the conversion:

  1. There's no automatic texture coordinate generation. Desktop glSOAR uses OpenGL to generate texture coordinates for the terrain mesh (by means of glTexGen) to save memory. Fortunately, a simple workaround is possible by setting the texture transformation matrix directly (details at fernlightning). Of course, when using GLES 2 you can just generate the coordinates in the shader.
  2. No wireframe display in GLES! There's no glPolygonMode, so you only get filled triangles. Desktop glSOAR can display a wireframe overlay over the textured terrain to show off cLOD in action by drawing the terrain in an additional pass with the polygon mode set to GL_LINE and using glPolygonOffset. In GLES, I could try rendering the terrain as GL_LINES instead of GL_TRIANGLES. That kind of works so far (I get wire quads instead of triangles though) for the simple terrain grid, but it will probably break when cLOD is implemented.
  3. Then I hit the show stopper, at least for GLES 1.0/1.1. There's no support for 32-bit indices (the GL_UNSIGNED_INT enum for glDrawElements) in the GLES core. And 16-bit indices are only good for terrains of size 129x129 and smaller (with SOAR, the mesh cannot simply be split into smaller chunks). Fortunately, there is a GLES extension called GL_OES_element_index_uint that allows the use of 32-bit indices. I've seen on the GLBenchmark page that my phone and many others (at least those with Adreno and PowerVR GPUs) support it, but my test program insisted otherwise. As it turned out, it's only supported with a GLES 2 context. So it was goodbye to GLES 1 and the fixed-function pipeline...
  4. The move to GLES 2 was actually quite easy since glSOAR just needs to output textured triangles with nothing fancier. GLES GLSL shaders are a little different from regular desktop OpenGL shaders. For instance, there are no predefined variables like gl_Vertex, gl_TexCoord, gl_ModelViewProjectionMatrix, etc., and there is only gl_Position and gl_FragColor for setting the results. Vertex positions, texture coordinates, and so on are passed to the shader as attributes and the transformation matrices as uniforms (see the shader sketch after this list). Fortunately, GLES-style shaders work in desktop OpenGL without problems.
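
To make items 1 and 4 concrete, here is a minimal GLES 2 shader pair written as Java string constants, the way shaders are typically embedded with libGDX. The attribute and uniform names (a_position, u_mvpMatrix, u_invTerrainSize) are my own picks for this sketch, not anything glSOAR mandates; texture coordinates are derived from the vertex x/z position in the vertex shader, which replaces desktop glTexGen.

// Minimal GLES 2 shaders as Java constants. All identifiers are illustrative.
static final String VERTEX_SHADER =
      "attribute vec3 a_position;\n"
    + "uniform mat4 u_mvpMatrix;\n"
    + "uniform vec2 u_invTerrainSize;\n"                     // 1 / terrain width and depth
    + "varying vec2 v_texCoord;\n"
    + "void main() {\n"
    + "  v_texCoord = a_position.xz * u_invTerrainSize;\n"   // replaces glTexGen
    + "  gl_Position = u_mvpMatrix * vec4(a_position, 1.0);\n"
    + "}";

static final String FRAGMENT_SHADER =
      "precision mediump float;\n"
    + "uniform sampler2D u_texture;\n"
    + "varying vec2 v_texCoord;\n"
    + "void main() {\n"
    + "  gl_FragColor = texture2D(u_texture, v_texCoord);\n"
    + "}";

The Java side then feeds positions through a vertex attribute and the combined model-view-projection matrix through a uniform, e.g. via libGDX's ShaderProgram and Mesh classes.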

Some Additional GLES Findings

A good listing of supported GLES extensions for different phone and tablet models can be found at GLBenchmark Results (select a model and then look at the GL config tab).

Texture Compression

Some form of texture compression is supported by nearly all (if not all) current mobile GPUs. On the desktop, it's easy now and has been for many years. We have S3TC/DXTC (supported by GPUs for ages), its variant ATI 3Dc (which uses the alpha-channel coding scheme from DXT5 and is supported by all DirectX 10 GPUs and some older ones too), and the recent addition of the BC6/BC7 formats on DirectX 11 class GPUs.

Unfortunately, it is not so easy in the GLES and mobile GPU world. The problem is that each vendor can support completely different compressed formats. The only certainty is that a GLES 2.0 capable GPU supports ETC1 (Ericsson Texture Compression), which has no alpha channel. As far as Android is concerned, ~90% of devices have a GLES 2 GPU (as of Oct 2012). Additionally, S3TC is supported by Nvidia Tegra, PVRTC by PowerVR, and ATI-TC/ATC by Adreno.
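
Picking the format at runtime comes down to checking the extension string. Here is a rough libGDX-flavoured sketch; the extension names are the standard ones, but the chooseFormat helper itself is just something I made up for illustration.

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;

// Decide which compressed texture format to load based on advertised extensions.
public final class CompressionSupport {
    public static String chooseFormat() {
        String ext = Gdx.gl20.glGetString(GL20.GL_EXTENSIONS);
        if (ext.contains("GL_EXT_texture_compression_s3tc"))     return "S3TC";  // Tegra, desktop GPUs
        if (ext.contains("GL_IMG_texture_compression_pvrtc"))    return "PVRTC"; // PowerVR
        if (ext.contains("GL_AMD_compressed_ATC_texture")
            || ext.contains("GL_ATI_texture_compression_atitc")) return "ATC";   // Adreno
        if (ext.contains("GL_OES_compressed_ETC1_RGB8_texture")) return "ETC1";  // any GLES 2 GPU
        return "uncompressed";
    }
}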

The new ETC2 compression looks interesting though. It is part of the core of the new OpenGL 4.3 as well as GLES 3.0. On the desktop, it should be available on all DirectX 11 class GPUs (once the drivers arrive). The quality is supposedly better than S3TC, and it has none of S3TC's patent issues.

Anyway, for the new glSOAR it looks like ETC1 for the Android target and S3TC for the desktop, most probably stored in KTX (Khronos Texture) files. So that means writing a KTX loader in Java and probably adding some ETC1 and KTX support to Vampyre Imaging Library too.

Some tools: the etcpack tool from Ericsson handles ETC1/ETC2 compression (and outputs KTX files), etc1tool for ETC1 is part of the Android SDK, and ATI Compressonator can compress to ETC1, S3TC/DXTC, 3Dc, and ATI-TC.

NPOT Textures

Non-power-of-two textures have been supported by desktop GPUs for quite some time (at least all DirectX 10 capable GPUs have full support; I'm not sure how "full" it is on, for example, Intel iGPUs). GLES 2 has limited support for NPOT textures (no mipmaps, limited texture wrapping modes, etc.), and with the GL_OES_texture_npot extension you get full NPOT support in GLES too.

Current Status and Near Future

Right now I have just a grid terrain rendering with no LOD. Some parts of the SOAR code are translated to Java already. I'm quite worried about the SOAR LOD performance on Android though. Firstly, the index buffer is rebuilt each frame (and can be quite big), and secondly, mesh refinement uses a lot of floating point calculations. Also, the memory limit for Android apps is quite low for larger terrains (SOAR needs 20 bytes per vertex plus a lot of 4-byte indices).

Well, if it is unusably slow, I can always abandon SOAR and do a Geomipmapping demo instead (that's the plan for later anyway :)).

glSOAR Android Test running on Nexus S

Porting glSOAR to Android and OpenGL ES, Part 1

A few weeks ago I had some sort of an urge to try game and/or graphics development on Android. I've been doing some Android application development lately, so I had all the needed tools set up already. However, what I wanted was a cross-platform engine or framework so that I could develop mostly on the desktop. Working with emulators and physical devices is tolerable when doing regular app development, where you of course want to use the UI framework native to the platform. For games or other hardware-stressing apps I really wouldn't like to work like that. The emulator is useless as it doesn't support OpenGL ES 2.0 and is generally very slow. And it's no pleasure with a physical device either: grabbing it off the table all the time, APK upload and installation take time, etc. So I started looking for solutions.

The Search

Initially, I considered using something like Oxygene. It uses Object Pascal syntax and can compile to .NET bytecode and also Java bytecode (and use .NET and Java libraries). However, I would still need to use some framework for graphics, input, etc., so I would be locked into the .NET or Java world anyway. I'm somewhat inclined to pick up Oxygene in the future for something (there's also a native OSX and iOS target in the works), but not right now. Also, it's certainly not cheap for just a few test programs. Then there is Xamarin - Mono for Android and iOS (MonoGame should work here?) - but that's even more expensive. When you look at Android game engines and frameworks specifically, you end up with things like AndEngine, Cocos2D-X, and libGDX.

After a little bit of research, I settled on libGDX. It is a Java framework with some native parts (for tasks where JVM performance may be inadequate) currently targeting desktop (Win/OSX/Linux), Android, and HTML5 (JavaScript + WebGL, in fact). Graphics are based on OpenGL (desktop) and OpenGL ES (Android and web). The great thing is that you can do most of the development in the desktop Java version, with faster debugging and code hot swapping. One can also use OpenGL commands directly, which is a must (at least for me).

The Test

I wanted to do a test program first, and I decided to convert the glSOAR terrain renderer to Java and OpenGL ES (GLES). The original is written in Object Pascal using desktop OpenGL. The plan was to convert it to Java and OpenGL ES with as few modifications as possible. At least for 3D graphics, libGDX is a relatively thin layer on top of GLES. You have some support classes like texture, camera, and vertex buffer, but you still need to know what a projection matrix is, bind resources before rendering, etc. What follows is a list of ideas, tasks, and problems from the conversion (focusing on Java and libGDX).

Project Setup

Using Eclipse, I created the project structure for libGDX-based projects. It is quite neat actually: you have a main Java project with all the shared platform-independent code and then one project for each platform (desktop, Android, ...). Each platform project has just a few lines of starter code that instantiates your shared main project for the respective platform (of course you can do more here).

Two problems here though. Firstly, changes in the main project won't trigger a rebuild of the Android project, so you have to trigger it manually (e.g. by adding/deleting an empty line in the Android project code) before running on Android. This is actually a bug in the ADT Eclipse plugin version 20, so hopefully it will be fixed in v21 (you can star this issue).

The second issue is asset management, but that is easy to fix. I want to use the same textures and other assets in the version for each platform, so they should be located in some directory shared between all the projects (like inside the main project, or a few directory levels up). The thing is that for Android, all assets are required to be located in the assets subdirectory of the Android project. The recommended solution for libGDX here is to store the assets in the Android project and create links (link folder or link source) in Eclipse for the desktop project pointing to the Android assets. I didn't really like that. I want my stuff to be where I want it to be, so I created some file system links instead. I used the mklink command in Windows to create junctions (as Total Commander wouldn't follow symlinks properly):

d:\Projects\Tests\glSoarJava> mklink /J glSoarAndroid\assets\data Data
Junction created for glSoarAndroid\assets\data <<===>> Data
d:\Projects\Tests\glSoarJava> mklink /J glSoarDesktop\data Data
Junction created for glSoarDesktop\data <<===>> Data

Now I have a shared Data folder at the same level as the project folders. In the future though, I guess some platform-specific assets will be needed (like different texture compressions, sigh).

Java

Too bad Java does not have any user-defined value types (like struct in C#). I split the TVertex type into three float arrays: one for the 3D position (at least value-type vectors would be nice; it's vertices[idx * 3 + 1] instead of vertices[idx].y now - which is more readable?) and the rest for the SOAR LOD-related parameters. Making TVertex a class and spawning millions of instances didn't seem like a good idea even before I began, and it would be impossible to pass the positions to OpenGL that way anyway.
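
Roughly, the split looks like the sketch below. The field names and the exact set of LOD parameters are assumptions for illustration; the point is that the positions end up in one tightly packed float array that can be handed to OpenGL, and that a position (12 bytes) plus two LOD floats add up to the 20 bytes per vertex mentioned later.

// Structure-of-arrays layout instead of an array of vertex objects.
// Field names and the exact LOD parameters are illustrative assumptions.
int vertexCount = 257 * 257;                     // e.g. a 257x257 height field
float[] positions = new float[vertexCount * 3];  // x, y, z packed together (12 bytes per vertex)
float[] radii     = new float[vertexCount];      // SOAR LOD parameter (4 bytes per vertex)
float[] errors    = new float[vertexCount];      // SOAR LOD parameter (4 bytes per vertex)

// Accessing the y coordinate of vertex idx:
int idx = 1000;
float y = positions[idx * 3 + 1];                // instead of vertices[idx].y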

Things like data for VBOs, vertex arrays, etc. are passed to OpenGL in Java using descendants of the Buffer class (direct buffers). The same goes for generating IDs with glGen[Textures|Buffers|...] and basically everything else that takes pointer-to-memory parameters. Of course, it makes sense in an environment where you cannot just touch memory as you like; still, it is kind of annoying for someone not used to it. At least libGDX comes with some buffer utilities, including fast copying through native JNI code.
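
For illustration, a small helper like this (my own naming) wraps a float array in a direct buffer that the GL bindings accept; libGDX's own BufferUtils does essentially the same job, with JNI-accelerated copies.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public final class BufferHelper {
    // Wrap a float[] in a direct FloatBuffer so it can be passed to glBufferData and friends.
    public static FloatBuffer toDirectBuffer(float[] data) {
        FloatBuffer buffer = ByteBuffer
                .allocateDirect(data.length * 4)   // 4 bytes per float
                .order(ByteOrder.nativeOrder())    // native byte order expected by the driver
                .asFloatBuffer();
        buffer.put(data);
        buffer.flip();                             // set the limit and rewind so GL reads from the start
        return buffer;
    }
}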

Performance

The roots of the SOAR terrain rendering method are quite old today and come from times when it was okay to spend CPU time to limit the number of triangles sent to the GPU as much as possible. That has been the complete opposite of the situation on PCs for a long time (unless you had an old integrated GPU, that is). I guess today this is true for the hardware in mobile devices as well (at least the newer ones). And there will also be some Java/Dalvik overhead...

Anyway, this is just an exercise, so the end result may very well be useless; the experience gained during development is what counts.

OpenGL >> GLES

Continue reading with Part 2, which focuses on the OpenGL to OpenGL ES transition.

Deskew Tool Updated

There is a new version of the Deskew command line tool, introduced in the post Deskewing Scanned Documents. Looks like quite a few people found it useful 🙂

What's new in the latest version?

  • Background color can be defined (empty space around the original page after the rotation is filled with this color)
  • "Area of interest" rectangle to force skew detection only in a selected part of the page (useful when e.g. noisy page borders or images would confuse skew detection if the entire page were processed)
  • 64-bit and Mac OSX support
  • PSD and TIFF file format support (TIFF only in Win32 for now, sorry)
  • Display of skew detection stats and program parameters

Download

  Deskew v1.30
» 4.3 MiB - 19,855 hits - June 19, 2019
Command line tool for deskewing scanned documents. Binaries for several platforms, test images, and Object Pascal source code included.

Source Code Repository

The public Mercurial source repository of Deskew is now hosted at Bitbucket: https://bitbucket.org/galfar/app-deskew.

Photo Sorting Tool

Let's say you have just spent a few weeks in some exotic country with a bunch of friends. Everyone had a digital camera and took thousands of photos. Now you have several directories with hordes of oddly named photos, and you just want to see all of them in the order they were taken. Throwing them all into one folder and ordering the files by date might help, but everyone's camera can have a different internal time. File system dates can also be lost on their way to you (FTP upload, etc.).

Being a programmer, rather than searching for some program on the Internet, I wrote my own quick and dirty command line tool for this task (mostly hard-coded paths, etc.) in mid 2010. This year, I added a GUI (where all the settings can be adjusted) and some more date & time functions, and basically made the whole thing usable. And finally, the beta release of PhotoMixer is available.
Continue reading

16bit half float in Pascal/Delphi

16-bit floating point numbers are used mostly in computer graphics. They are also called half precision floating point numbers (as they have half the bits of single precision 32-bit floats). There's one sign bit, a five-bit exponent, and ten bits of mantissa. Half floats are not really meant for arithmetic computations due to their limited precision (and no support in common CPUs/FPUs).
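
To make that layout concrete, here is a rough sketch of unpacking a half float into a regular 32-bit float; it's written in Java purely for illustration here (the code this post links to is Object Pascal).

// Unpack an IEEE 754 binary16 value stored in the low 16 bits of 'bits'.
// Handles zeros, subnormals, normals, infinities and NaN.
static float halfToFloat(int bits) {
    int h    = bits & 0xFFFF;
    int sign = (h >> 15) & 0x1;
    int exp  = (h >> 10) & 0x1F;    // 5-bit exponent, bias 15
    int mant =  h        & 0x3FF;   // 10-bit mantissa

    float value;
    if (exp == 0) {
        value = mant * 5.9604645e-8f;   // zero or subnormal: mant * 2^-24
    } else if (exp == 31) {
        value = (mant == 0) ? Float.POSITIVE_INFINITY : Float.NaN;
    } else {
        value = (1.0f + mant / 1024.0f) * (float) Math.pow(2, exp - 15);  // normal number
    }
    return (sign == 0) ? value : -value;
}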

Half floats first appeared in the early 2000s as samples in images and textures. Floats provide a higher dynamic range than regular 8-bit or 16-bit integer samples. On the other hand, the commonly used single and double precision floats have a much higher memory cost per pixel. Half floats have more reasonable memory requirements, and their precision is adequate for many uses in imaging.

16-bit float formats have been supported by ATI and NVidia GPUs for many years. I'm not sure about other IHVs, but at least Direct3D 10 capable GPUs should all support them.

Read on if you're interested in how to convert between half and single precision floats (Object Pascal code).

Continue reading

Deskewing Scanned Documents

Check out the updates and new versions of the Deskew tool.

Some time ago I wrote a simple command line tool for deskewing scanned documents called Deskew. Technically, it does a rotation, since angles are preserved and a skew transformation doesn't preserve them. However, deskewing is the commonly used term in this context.

Deskewing some smart paper

My approach is fairly common for this problem: the rotation angle is first determined using the Hough transform, and then the image is rotated accordingly. The classical Hough transform can identify lines in an image; it was later extended to allow detection of arbitrary shapes.

Lines of text can be thought of as horizontal lines in the image. In a skewed scanned document, all these lines are rotated by some small angle. We can start with the equation of a line, y = k · x + q. Since we're interested in the angle, we can rewrite it as y = (sin(α) / cos(α)) · x + q. Finally, we can rearrange it to y · cos(α) − x · sin(α) = d. Now, every point [x, y] in the image has an infinite number of lines going through it, each defined by two parameters: the angle α and the distance from the origin d.

We want to consider lines only for certain points of the input image. Ideally, those would be the base lines on which the "text is sitting". A simple way of determining these points is to check for black pixels which have white pixels just below them. Now, for each of the classified points, we determine the parameters α and d of all the lines that go through it. To get a finite number of lines, we calculate d for angles α from a certain range (I use an angle step of 0.1 degrees). We want to find a line that intersects as many classified points as possible, so an accumulator is used to store "votes" for each calculated line. For each point that is believed to be on a text base line, we add one vote for every line that intersects it. At the end, we find the top lines, i.e. those with the most votes. Ideally, these are the base lines of all the lines of text in the document. Finally, we get the rotation angle by averaging the angle α of the top lines and rotate the whole image accordingly.
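
Deskew itself is written in Object Pascal, but the voting step can be sketched in a few lines. The sketch below uses made-up names (dark, maxAngleDeg) and Java just for illustration: dark[y][x] holds the thresholded pixels, and the accumulator is indexed by angle step and by d shifted into a non-negative range.

// Hough-style voting over candidate base-line points, with 0.1 degree angle steps.
static int[][] buildAccumulator(boolean[][] dark, double maxAngleDeg) {
    int height = dark.length, width = dark[0].length;
    int angleSteps = (int) Math.round(2 * maxAngleDeg / 0.1) + 1;
    int dRange = width + height;                  // |d| cannot exceed this for these angles
    int[][] acc = new int[angleSteps][2 * dRange + 1];

    for (int y = 0; y < height - 1; y++) {
        for (int x = 0; x < width; x++) {
            // Candidate base-line point: dark pixel with a light pixel just below it
            if (dark[y][x] && !dark[y + 1][x]) {
                for (int a = 0; a < angleSteps; a++) {
                    double alpha = Math.toRadians(-maxAngleDeg + a * 0.1);
                    int d = (int) Math.round(y * Math.cos(alpha) - x * Math.sin(alpha));
                    acc[a][d + dRange]++;         // one vote for the line (alpha, d)
                }
            }
        }
    }
    return acc;
}

The skew estimate is then the average α of the accumulator cells with the most votes.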

An important part is this one: "check for black pixels which have white pixels just below". What's black and what's white is determined by comparing the value of the current pixel against some given threshold. For images where the background is plain white and the text is black, it's easy to just use 0.5 as the threshold. But when the background/foreground distinction is not so sharp, calculating the threshold adaptively based on the current image can be very useful. Deskew supports both adaptive threshold calculation and specifying a constant threshold as a command line parameter.

Deskewing some math exercise

The implementation is written in Object Pascal and uses the Imaging library for reading and writing various image file formats. There are precompiled binaries for a few platforms; others can be built from sources using the Free Pascal compiler. The archive also contains a few test images.

  Deskew v1.30
» 4.3 MiB - 19,855 hits - June 19, 2019
Command line tool for deskewing scanned documents. Binaries for several platforms, test images, and Object Pascal source code included.

Ugly Images of Disabled Menu Items in Delphi

Ever used 32-bit images stored in a TImageList in your Delphi application? Toolbars and some other VCL controls have a DisabledImages property which is automatically used to get images for disabled toolbar buttons. But what about menu components? They don't have this property, and drawing of disabled images is handled by the TImageList holding the original enabled images (the TMainMenu.Images property). And the results are really abysmal. How can this be fixed?

One way is to override the DoDraw method of TImageList and change the code that draws disabled images. You can do a regular RGB to grayscale conversion here, or let Windows draw it for you in grayscale with nearly no work on your part. You can do this by calling ImageList_DrawIndirect with the ILS_SATURATE parameter. Note that this works only on Windows XP and newer, and for 32-bit images only. For older targets or color depths, doing your own RGB-to-grayscale conversion is an option (a good idea would probably be to cache the converted grayscale images somewhere so they don't need to be converted on every draw call).

Here's the code of the DoDraw method using ILS_SATURATE:

type
// Descendant of regular TImageList
TSIImageList = class(TImageList)
protected
  procedure DoDraw(Index: Integer; Canvas: TCanvas; X, Y: Integer;
    Style: Cardinal; Enabled: Boolean = True); override;
end;

procedure TSIImageList.DoDraw(Index: Integer; Canvas: TCanvas; X, Y: Integer;
  Style: Cardinal; Enabled: Boolean);
var
  Options: TImageListDrawParams;

  function GetRGBColor(Value: TColor): Cardinal;
  begin
    Result := ColorToRGB(Value);
    case Result of
      clNone: Result := CLR_NONE;
      clDefault: Result := CLR_DEFAULT;
    end;
  end;

begin
  if Enabled or (ColorDepth <> cd32Bit) then
    inherited
  else if HandleAllocated then
  begin
    FillChar(Options, SizeOf(Options), 0);
    Options.cbSize := SizeOf(Options);
    Options.himl := Self.Handle;
    Options.i := Index;
    Options.hdcDst := Canvas.Handle;
    Options.x := X;
    Options.y := Y;
    Options.cx := 0;
    Options.cy := 0;
    Options.xBitmap := 0;
    Options.yBitmap := 0;
    Options.rgbBk := GetRGBColor(BkColor);
    Options.rgbFg := GetRGBColor(BlendColor);
    Options.fStyle := Style;
    Options.fState := ILS_SATURATE; // Grayscale for 32bit images

    ImageList_DrawIndirect(@Options);
  end;
end;

Important note: For ILS_SATURATE to work correctly, the source image files must be 32-bit with proper alpha channel data; setting the color depth of the TImageList to 32-bit is not enough! If you don't see any images drawn, this is probably the cause: an 8/24-bit image is loaded from a file and then inserted into a 32-bit TImageList. As there is no alpha channel data in the source image, it is drawn as fully transparent, so you don't see anything.

http://msdn.microsoft.com/en-us/library/bb761537(VS.85).aspx

Shift Right: Delphi vs C

A few weeks ago I converted a little function from C to Delphi. I kept getting completely wrong results even though I was sure the C to Pascal conversion was right (it was really just a few lines). After some desperate time, I tried replacing the SHR operator with a plain DIV (as A SHR 1 = A DIV 2 and so on). To my surprise, I immediately got the right results. Can Delphi's SHR operator (I didn't test it in Free Pascal) behave differently from C's >> operator?

It does, in fact. SHR treats its first operand as an unsigned value even though it is a variable of a signed type, whereas >> takes the sign bit into account. In the function I converted, the operand of the right shift was often negative, and Delphi's SHR simply ignored the meaning of the sign bit.

A Bit of Code

int a, b1, b2;
a = -512;
b1 = a >> 1;
b2 = a / 2;

After running this C code both b1 and b2 have a value of -256.

var A, B1, B2: Integer;
A := -512;
B1 := A shr 1;
B2 := A div 2;

This Delphi code however yields a different result: B2 is -256 as expected, but B1 has a value of 2147483392.

A Bit of Assembler

Assembler output of C code:

Unit1.cpp.22: b1 = a >> 1;
mov eax,[ebp-$0c]
sar eax,1
mov [ebp-$10],eax
Unit1.cpp.23: b2 = a / 2;
mov edx,[ebp-$0c]
sar edx,1
jns $00401bb9
adc edx,$00
mov [ebp-$14],edx

Assembler output of Delphi code:

Unit1.pas.371: B1 := A shr 1;
mov eax,[ebp-$0c]
shr eax,1
mov [ebp-$1c],eax
Unit1.pas.373: B2 := A div 2;
mov eax,[ebp-$0c]
sar eax,1
jns $00565315
adc eax,$00
mov [ebp-$20],eax

As you can see, the asm output of the C and Delphi divisions is identical. What differs is the asm for the shift right operator: Delphi uses the shr instruction whereas C uses the sar instruction. The difference is that shr does a logical shift and sar an arithmetic one.

The SHR instruction clears the most significant bit (see Figure 6-7 in the Intel Architecture Software Developer's Manual, Volume 1); the SAR instruction sets or clears the most significant bit to correspond to the sign (most significant bit) of the original value in the destination operand.

Quoted from: http://faydoc.tripod.com/cpu/shr.htm

Working in MODULA-2

I've been working in Modula-2 for the last few weeks. It's considered a dead language now, one of Pascal's descendants designed by Niklaus Wirth. I've been doing some updates to quite an old project, an embedded system for military use. I had to run a Mac OS 7.5 emulator to get the original 1989 compiler running. The Modula-2 dialect understood by this compiler is quite strict: you cannot even mix cardinal and integer numbers without explicitly converting one of them.

The first difference you may notice (compared to Pascal) is the case sensitivity of Modula-2; for instance, all reserved words are uppercase. You don't have to write so many begin-end blocks though. Another C-like trait is the importance of the order of header files (definition modules) during compilation: change it a little in a large project and the whole thing breaks down.

More info about Modula-2 if you’re interested: http://www.modula2.org/.

Modula-2