Deskew CLI Tool v1.30 Released

New version of the Deskew command line tool is ready. You can find general info about Deskew on the Deskew Tools page or check out the README.

The main improvement in this version is better quality of rotated images. They're less blurry with the default filtering, especially when rotated by less than one degree. You can now also select other filters; the choices are: nearest (no filtering, very fast), bilinear (default), bicubic, and Lanczos (overall best quality, pretty slow).
Deskew Rotation Quality Comparison

Change list for v1.30

  • fix #15: Better image quality after rotation - better default and also selectable nearest|linear|cubic|lanczos filtering
  • fix #5: Detect skew angle only (no rotation done) - optionally only skew detection
  • fix #17: Optional auto-crop after rotation
  • fix #3: Command line option to set output compression - now for TIFF and JPEG
  • fix #12: Bad behavior when an output is given and no deskewing is needed
  • libtiff on macOS is now picked up also when its binaries are put directly in the same directory as deskew
  • text output is flushed after every write (Linux/Unix): it used to be flushed only when writing to a device, not to a file or pipe

Downloads

  Deskew v1.30
» 4.3 MiB - 19,797 hits - June 19, 2019
Command line tool for deskewing scanned documents. Binaries for several platforms, test images, and Object Pascal source code included.

GitHub Release

GUI Frontend for Deskew

I've created a simple GUI frontend for Deskew. Now it's easier to process many files without writing shell scripts. It needs the command line tool, which is called for each input file. You can set the basic options and most of the advanced options for deskewing in the GUI.

Prebuilt executables for Windows and Linux are available in the download - you just place them in the same folder as the command line tool. The macOS version is a bit more convenient - it's a self-contained app bundle with the CLI tool already inside, all packaged in a DMG image. You can also set an explicit path to the command line tool in the program itself.

The GUI is written in Lazarus, so it may not be the best native-looking application out there, but it saved me some time - there wouldn't be any GUI at all if it were a big time sink.

Download

  DeskewGui v0.90
» 4.1 MiB - 5,816 hits - March 18, 2019
GUI frontend for Deskew command line tool. Prebuilt binaries for Windows, macOS, and Linux. Windows and Linux versions need Deskew command line tool binaries.

Remember that for Windows and Linux you also need the Deskew command line tool if you don't have it already:

  Deskew v1.30
» 4.3 MiB - 19,797 hits - June 19, 2019
Command line tool for deskewing scanned documents. Binaries for several platforms, test images, and Object Pascal source code included.

Screenshots

Basic options and files to deskew in Deskew GUI in Windows

Advanced options in Deskew GUI in macOS

Deskewing in progress in Deskew GUI in Windows

Output of the command line tool in Deskew GUI in Linux

Bug Reports And Source Code

The GUI is in the same repository as the command line tool; you can find the links on the Deskew Tools page.

Deskew Tool v1.25 Released

New version of the Deskew command line tool is ready. You can find general info about Deskew on the Deskew Tools page.

Change List for Deskew 1.25

  • fixed issue #6: Preserve DPI measurement system (TIFF)
  • fixed issue #4: Output image not saved in requested format (when deskewing is skipped)
  • dynamic loading of libtiff library - adds TIFF support in macOS when libtiff is installed
  • fixed issue #8: Cannot compile in Free Pascal 3.0+ (Windows) - Fails to link precompiled LibTiff library
  • fixed issue #7: Windows FPC build fails with Access violation exception when loading certain TIFFs (especially those saved by Windows Photo Viewer etc.)
  • Linux ARM build is now also included in the release

Download

  Deskew v1.30
» 4.3 MiB - 19,797 hits - June 19, 2019
Command line tool for deskewing scanned documents. Binaries for several platforms, test images, and Object Pascal source code included.

Deskew Tool v1.20 Released

New version of the Deskew command line tool is ready. You can find general info about Deskew on the Deskew Tools page.

Change List for Deskew 1.20

  • much faster rotation, especially when background color is set (>2x faster, 2x less memory)
  • can skip deskewing step if detected skew angle is lower than parameter (possible speedup when processing large batches)
  • new option for timing of individual steps
  • fix: crash when last row of page is classified as text
  • misc: default background color is now opaque black, new forced output format "rgb24",
    background color can also define an alpha channel, nicer formatting of text output

Download

  Deskew v1.30
» 4.3 MiB - 19,797 hits - June 19, 2019
Command line tool for deskewing scanned documents. Binaries for several platforms, test images, and Object Pascal source code included.

Multilevel Geomipmapping Program + Sources Released

There have been a few requests for the source code of the Multilevel Geomipmapping terrain rendering demo. So I'm finally doing this now. It has not been touched since 2008, but it compiles fine in the current version of Lazarus. I tested it only on Windows, but back in 2008 it also ran on Linux and FreeBSD. Unfortunately, not all of the test terrain data could be included because of its massive size.

You can find more info in the included Readme and the previously linked article. Note: the release archive is in 7z format to keep the download size smaller.

  Multilevel Geomipmapping
» 92.1 MiB - 96,610 hits - May 5, 2014
Terrain renderer using OpenGL. Includes Object Pascal source code, binaries, and test data.

  • Small terrain 2k x 2k
  • Multilevel tree nodes and wireframe display

Deskew Tool Version 1.10

New version of the Deskew command line tool is ready. You can find general info about Deskew on the Deskew Tools page.

Change List for Deskew 1.10

  • TIFF support now also for Win64 and 32/64bit Linux platforms
  • forced output formats
  • fix: output file names were always lowercase
  • fix: preserves resolution metadata (e.g. 300dpi) of input when writing output


Android Terrain Rendering: Vertex Texture Fetch, Part 1

To my surprise, I found out that the GPU (PowerVR SGX 540) in my venerable Nexus S (2010) supports vertex texture fetch (VTF). That is, accessing texture pixels in the vertex shader -- a very useful feature for terrain rendering. About a year ago, when I started investigating terrain rendering on Android devices, I did some searching for VTF support and figured out it wasn't there yet (similar to the situation years ago when desktop OpenGL 2.0 was released with support for texture sampling in GLSL vertex shaders, but most GL implementations just reported GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS to be zero). Now I don't know how I missed it on my own phone; maybe there was some Android update with updated GPU drivers during the last year? I have no idea how many other devices support it now. Hopefully, the newest ones with OpenGL ES 3 all support it. I wouldn't be surprised if, among GLES 2 devices, only PowerVR + Android 4+ combinations supported it.

Overview

Anyway, let's focus on terrain rendering - here's a rough outline:

  1. Put the entire heightmap into a texture.
  2. Have a small 2D grid mesh (say 16x16 or 32x32) ready for rendering terrain tiles.
  3. Build a quad tree over the terrain. The root node covers the entire terrain and each child then covers one quarter of its parent's area.
  4. Now we can start rendering; do this every frame:
    1. Traverse the quadtree starting from the root.
    2. For each child node test if the geometry grid provides sufficient detail for rendering the area covered by this node:
      • YES it does, mark this node for rendering and end traversal of this subtree.
      • NO it does not, continue traversal and test children of this node (unless we're at leaf already).
    3. Now take the list of marked nodes and render them. The same 2D grid is used to render each tile: it's scaled according to the tile's covered area and its vertices are displaced by height values read from the texture (see the traversal sketch after the figure below).
The root covers the entire terrain; each child covers a quarter of its parent's area.
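Just to make the traversal concrete, here is a rough Java sketch of the selection step. This is not the actual demo code - the class and field names are made up, and the "sufficient detail" test is the simple distance-based one:

import java.util.List;

// Hypothetical quadtree node covering a square part of the terrain.
class TerrainNode {
    float centerX, centerY, centerZ; // world-space center of the covered area
    float size;                      // edge length of the covered square
    TerrainNode[] children;          // null for leaf nodes

    boolean isLeaf() {
        return children == null;
    }

    // Simple distance-based metric: the node's grid is considered detailed
    // enough when the camera is farther away than some multiple of its size.
    boolean detailIsSufficient(float camX, float camY, float camZ, float splitFactor) {
        float dx = centerX - camX, dy = centerY - camY, dz = centerZ - camZ;
        float distance = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
        return distance > splitFactor * size;
    }
}

class QuadtreeLod {
    // Collects the nodes to render this frame. Each selected node is later
    // drawn as one instance of the shared 16x16/32x32 grid, scaled to the
    // node's area and displaced by heights fetched in the vertex shader.
    static void selectNodes(TerrainNode node, float camX, float camY, float camZ,
                            float splitFactor, List<TerrainNode> out) {
        if (node.isLeaf() || node.detailIsSufficient(camX, camY, camZ, splitFactor)) {
            out.add(node); // grid resolution is enough - stop traversing this subtree
            return;
        }
        for (TerrainNode child : node.children) {
            selectNodes(child, camX, camY, camZ, splitFactor, out);
        }
    }
}

Every frame you clear a reusable list and call selectNodes on the root with the current camera position; the result is the list of tiles from step 3.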

This is basically what I originally wanted for Multilevel Geomipmapping years ago but couldn't do in the end because of the state of VTF support on desktop at that time.

So what exactly is the benefit of VTF over, let's say, geomipmapping here?

The main benefit is the ability to get the height of the terrain at any position (and multiple times) when processing each tile vertex. In traditional geomipmapping, even if you can move tile vertices around, it's no use since you have only one fixed height value available. With VTF, you can move a vertex around the tile as you like and still get the correct height value. This greatly simplifies tasks like connecting neighboring tiles with different levels of detail. No ugly skirts or special stitching strips of geometry are needed, as you can simply move edge points around in the vertex shader. Geomorphing solutions also become usable without much work. And you can display larger terrains as well: with geomipmapping you always have to draw a fixed set of tiles (the visible leaves) -- a number that goes up fast when you enlarge the terrain. VTF may allow you to draw a roughly fixed number of tiles regardless of the actual terrain size (as distant tiles cover a much larger area compared to geomipmap tiles with a fixed area). Additionally, terrain normals can be calculated inside the shaders from neighboring height values.
Finally, since the heightmap is now a regular texture, you get filtering and access to compressed texture formats to fit more of the terrain into memory.
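The vertex shader side of this is simple. The following is only a minimal sketch of the idea, not the demo's actual shader - the attribute/uniform names are mine, and world XZ is assumed to match the heightmap UV space for brevity. Note the texture2DLod call: that's the vertex texture fetch, since plain texture2D is not available in GLES 2 vertex shaders.

import com.badlogic.gdx.graphics.glutils.ShaderProgram;

public class VtfTerrainShader {
    // The shared 2D grid is scaled/offset into the tile's area, then each
    // vertex reads its height from the heightmap texture (VTF).
    static final String VERTEX = ""
        + "attribute vec2 a_gridPos;\n"      // grid vertex in [0,1]x[0,1]
        + "uniform vec2 u_tileOffset;\n"     // tile's origin in heightmap UV space
        + "uniform float u_tileScale;\n"     // tile's size in heightmap UV space
        + "uniform float u_heightScale;\n"   // world-space height scaling
        + "uniform sampler2D u_heightmap;\n"
        + "uniform mat4 u_mvpMatrix;\n"
        + "varying vec2 v_texCoord;\n"
        + "void main() {\n"
        + "  vec2 uv = u_tileOffset + a_gridPos * u_tileScale;\n"
        + "  float h = texture2DLod(u_heightmap, uv, 0.0).r * u_heightScale;\n"
        + "  v_texCoord = uv;\n"
        + "  gl_Position = u_mvpMatrix * vec4(uv.x, h, uv.y, 1.0);\n"
        + "}\n";

    static final String FRAGMENT = ""
        + "#ifdef GL_ES\nprecision mediump float;\n#endif\n"
        + "uniform sampler2D u_texture;\n"
        + "varying vec2 v_texCoord;\n"
        + "void main() { gl_FragColor = texture2D(u_texture, v_texCoord); }\n";

    public static ShaderProgram create() {
        ShaderProgram shader = new ShaderProgram(VERTEX, FRAGMENT);
        if (!shader.isCompiled()) {
            // Typically fails here on GPUs without VTF support.
            throw new IllegalStateException(shader.getLog());
        }
        return shader;
    }
}

Edge stitching then amounts to snapping edge vertices' UVs to the neighbor's coarser grid before the fetch, which is why no skirts are needed.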

There must be some disadvantages, right?

Sure: support for VTF on mobile GLES 2 GPUs is scarce, so for anything other than a tech demo it's useless for the time being. Hopefully, all GLES 3 GPUs will support it - and with usable performance, as VTF was uselessly slow on desktops in the beginning.

Implementation

I have added an experimental VTF-based terrain renderer to the Terrain Rendering Demo for Android testbed and it looks promising. Stitching of the tiles works flawlessly. More work is needed on selecting nodes for rendering (render the node or split into children?). Currently, there's only a simple distance-based metric, but I want to devise something that takes the classical "screen-space error" into account (roughly sketched below). And maybe some fiddling with geomorphing on top ...
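For reference, the classical screen-space error test is just a perspective projection of the node's geometric error; a minimal sketch (made-up names, assuming a symmetric perspective projection):

public final class ScreenSpaceError {
    // Projects a node's geometric error (maximum world-space deviation of the
    // coarse grid from the full-resolution terrain) to an approximate error
    // in pixels at the given distance, and splits when it exceeds a chosen
    // pixel tolerance.
    public static boolean shouldSplit(float geometricError, float distanceToCamera,
                                      float verticalFovRadians, float viewportHeight,
                                      float tolerancePixels) {
        if (distanceToCamera <= 0f) {
            return true; // camera is inside the node - always refine
        }
        // Converts a world-space length at 'distanceToCamera' into pixels.
        float worldToPixels = viewportHeight / (2f * (float) Math.tan(verticalFovRadians / 2f));
        float projectedError = geometricError / distanceToCamera * worldToPixels;
        return projectedError > tolerancePixels;
    }
}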

Some of the implementation details will follow in part 2 (soon!).

VTF Terrain Shot

Terrain Rendering Demo for Android

I finally got around to releasing the Android terrain rendering demo I've been working on for the last few months (a few moments here and there). I did the bulk of the work in November 2012, partly described in the posts Porting glSOAR to Android and OpenGL ES, Part 1 and Porting glSOAR to Android and OpenGL ES, Part 2 – the third part is still just a draft 🙁

Anyway, here's the current version, which supports the Geomipmapping and SOAR terrain rendering methods. Gritty details about the internals will follow in some future post. There is also a nearly identical desktop version for reference - one advantage of using libGDX for this.

Terrain in action

Downloads and Installs

Google Play Store for Android Version

  glTerrainJava for Desktop v0.30
» 15.3 MiB - 3,182 hits - July 4, 2013
Desktop version of Java terrain rendering demo.

Sources: https://github.com/galfar/glTerrainJava

Controls

When the demo starts you get to the main menu screen. Here you can select the terrain LOD method and some parameters. An important one is "tolerance in pixels", which controls when a part of the terrain switches to a coarser representation. Basically, lower tolerance = better quality = lower performance.

On Android, just check "autowalk" in the menu and then swipe a finger on the display to look around and change direction. Better/more controls are on the todo list. On desktop, you can also use these keys when viewing the terrain: W/Up - forward, S/Down - backward, Ctrl - move really fast, +/- - change tolerance, O - toggle wireframe overlay.

More Screens

Settings menu

Wireframe overlay

Future

  • Benchmark mode - terrain flyover
  • Some instructions inside
  • Controls for walking over the terrain on keyboard-less devices
  • Geomipmap tiles without skirts
  • LOD method using vertex texture fetch, will it actually run on any phone?
  • Multithreading for mesh refinement

Limitations

Needs at least a 2048x2048 maximum texture size (4096 on desktop, as a more detailed texture is used), GL_OES_element_index_uint for SOAR, and GL_OES_standard_derivatives for the wireframe overlay. For instance, SOAR won't run on a Galaxy S3 with the Mali-400 MP GPU. Also, the wireframe overlay is only available for Geomipmapping (it uses barycentric coordinates for the wireframe).
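If you want to check for these at startup, querying the extension string is enough; for example with libGDX (just a sketch):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;

public final class GlCapabilities {
    // The driver reports extensions as one space-separated string.
    public static boolean hasExtension(String name) {
        String extensions = Gdx.gl.glGetString(GL20.GL_EXTENSIONS);
        return extensions != null && extensions.contains(name);
    }

    public static boolean supportsSoar() {
        // SOAR needs 32-bit indices for its single large index buffer.
        return hasExtension("GL_OES_element_index_uint");
    }

    public static boolean supportsWireframeOverlay() {
        // The barycentric wireframe needs derivatives in the fragment shader.
        return hasExtension("GL_OES_standard_derivatives");
    }
}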

Porting glSOAR to Android and OpenGL ES, Part 2

Part 1 was more about the libraries used and Java. Now part 2 tells the story of the transition from OpenGL to OpenGL ES for the glSOAR terrain renderer.

glSOAR OpenGL ES Gotchas

Initially, I wanted to just use the fixed pipeline because that's what desktop glSOAR uses (remember, the original SOAR is from 2001). So that meant using GLES 1.0/1.1, since GLES 2.0 removed all the fixed pipeline stuff (matrix settings, lighting, immediate mode, and so on). To quote the official GLES docs:

Note: Be careful not to mix OpenGL ES 1.x API calls with OpenGL ES 2.0 methods! The two APIs are not interchangeable and trying to use them together only results in frustration and sadness.

I have to admit I didn't really check what features GLES actually has, somehow assuming that it would be on par with regular OpenGL (1.x core or some 2.0). Here's a list of a few problems I encountered during the conversion:

  1. There's no automatic texture coordinate generation. Desktop glSOAR uses OpenGL to generate texture coordinates for the terrain mesh (by means of glTexGen) to save memory. Fortunately, a simple workaround is possible by setting the texture transformation matrix directly (details at fernlightning). Of course, when using GLES 2 you can just generate the coordinates in the shader.
  2. No wireframe display in GLES! There's no glPolygonMode, so you only get filled triangles. Desktop glSOAR can display a wireframe overlay over the textured terrain to show off cLOD in action, by drawing the terrain in an additional pass with the polygon mode set to GL_LINE and using glPolygonOffset. In GLES, I could try rendering the terrain as GL_LINES instead of GL_TRIANGLES. That kind of works so far (getting wire quads instead of triangles though) for the simple terrain grid, but it will probably break when cLOD is implemented.
  3. Then I hit the show stopper, at least for GLES 1.0/1.1. There's no support for 32-bit indices (the GL_UNSIGNED_INT enum for glDrawElements) in the GLES core. And 16-bit indices are only good for terrains sized 129x129 and smaller (with SOAR, the mesh just cannot be simply split into smaller chunks). Fortunately, there is a GLES extension that allows the usage of 32-bit indices, called GL_OES_element_index_uint. I've seen on the GLBenchmark page that my phone and many others (at least those with Adreno and PowerVR GPUs) support it, but my test program insisted otherwise. As it turned out, it's only supported with a GLES 2 context. So it was goodbye to GLES 1 and the fixed-function pipeline...
  4. The move to GLES 2 was actually quite easy, since glSOAR just needs to output textured triangles with nothing fancier. GLES GLSL shaders are a little different from regular OpenGL shaders. For instance, there are no predefined variables like gl_Vertex, gl_TexCoord, gl_ModelViewProjectionMatrix, etc., and there are only gl_Position and gl_FragColor for setting the results. Vertex positions, texture coordinates, and so on are passed to the shader as attributes and transformation matrices as uniforms (a rough sketch of what this looks like with libGDX follows after this list). Fortunately, GLES-style shaders work in desktop OpenGL without problems.
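As a sketch of how point 4 looks with libGDX (class, attribute, and uniform names are made up here): vertex data goes into a Mesh with named attributes, and the matrix is set as a uniform before drawing, instead of relying on the removed built-ins.

import com.badlogic.gdx.graphics.Camera;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Mesh;
import com.badlogic.gdx.graphics.VertexAttribute;
import com.badlogic.gdx.graphics.VertexAttributes.Usage;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

// Positions/texcoords go in as named attributes and the matrix as a uniform,
// replacing gl_Vertex, gl_TexCoord, and gl_ModelViewProjectionMatrix.
public final class TerrainTileRenderer {
    public static Mesh createGridMesh(float[] vertices, short[] indices) {
        // 5 floats per vertex: 3 for position, 2 for texture coordinates.
        Mesh mesh = new Mesh(true, vertices.length / 5, indices.length,
            new VertexAttribute(Usage.Position, 3, "a_position"),
            new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord"));
        mesh.setVertices(vertices);
        mesh.setIndices(indices);
        return mesh;
    }

    public static void render(Mesh mesh, ShaderProgram shader, Camera camera) {
        // Assumes the terrain texture is already bound to texture unit 0.
        shader.begin();
        shader.setUniformMatrix("u_mvpMatrix", camera.combined); // was gl_ModelViewProjectionMatrix
        shader.setUniformi("u_texture", 0);
        mesh.render(shader, GL20.GL_TRIANGLES);
        shader.end();
    }
}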

Some additional GLES findings

A good listing of supported GLES extensions for different phone and tablet models can be found at GLBenchmark Results (select a model and then look at the GL config tab).

Texture Compression

Some form of texture compression is supported by nearly all (if not all) current mobile GPUs. On desktop, it's easy now and has been for many years. We have S3TC/DXTC (supported by GPUs for ages), its variant ATI 3Dc (uses the alpha channel coding scheme from DXT5; supported by all DirectX 10 GPUs and older ones too), and the recent addition of the BC6/BC7 formats on DirectX 11 class GPUs.

Unfortunately, it is not so easy in the GLES and mobile GPU world. The problem is that each vendor can support completely different compressed formats. The only certainty is that a GLES 2.0 capable GPU supports ETC1 (Ericsson Texture Compression; no alpha channel). As far as Android is concerned, ~90% of devices have a GLES 2 GPU (as of Oct 2012). Additionally, S3TC is supported by Nvidia Tegra, PVRTC by PowerVR, and ATI-TC/ATC by Adreno.

New ETC2 compression looks interesting though. It is part of the core of the new OpenGL 4.3 as well as GLES 3.0. On desktop, it should be available for all DirectX 11 class GPUs (when the drivers arrive). The quality is supposedly better than S3TC and it has none of its patent issues.

Anyway, for the new glSOAR it looks like ETC1 for the Android target and S3TC for desktop, most probably in KTX (Khronos Texture) files. So that means writing a KTX loader in Java and probably some ETC1 and KTX support for the Vampyre Imaging Library too.

Some tools: the etcpack tool from Ericsson handles ETC1/ETC2 compression (outputs KTX files), etc1tool for ETC1 is part of the Android SDK, and ATI Compressonator can compress ETC1, S3TC/DXTC, 3Dc, and ATI-TC.

NPOT Textures

Non-power-of-two textures have been supported by desktop GPUs for quite some time (at least all DirectX 10 capable GPUs have full support - not sure how "full" it is, for example, on Intel iGPUs). GLES 2 has limited support for NPOT textures (no mipmaps, limited texture wrapping modes, etc.), and with the GL_OES_texture_npot extension you get full support for NPOT textures in GLES.

Current Status and Near Future

Right now I have just grid terrain rendering with no LOD done. Some parts of the SOAR code are already translated to Java. I'm quite worried about the SOAR LOD performance on Android though. Firstly, the index buffer is rebuilt each frame (and can be quite big) and secondly, mesh refinement uses a lot of floating point calculations. Also, the memory limit for Android apps is quite low for larger terrains (20 bytes are needed per vertex for SOAR, plus a lot of 4-byte indices).

Well, if it is unusably slow, I can always abandon SOAR and do a Geomipmapping demo instead (that's the plan for later anyway :)).

glSOAR Android Test running on Nexus S

Porting glSOAR to Android and OpenGL ES, Part 1

A few weeks ago I had some sort of an urge to try game and/or graphics development on Android. I've been doing some Android application development lately, so I had all the needed tools set up already. However, what I wanted was some cross-platform engine or framework so that I could develop mostly on the desktop. Working with emulators and physical devices is tolerable when doing regular app development, and of course you want to use the UI framework native to the platform. For games or other hardware-stressing apps I really wouldn't like to work like that. The emulator is useless as it doesn't support OpenGL ES 2.0 and is generally very slow. And it's no pleasure with a physical device either - grabbing it off the table all the time, APK upload and installation take time, etc. So I started looking for some solutions.

The Search

Initially, I considered using something like Oxygene. It uses Object Pascal syntax and can compile to .NET bytecode and also Java bytecode (and use .NET and Java libs). However, I would still need to use some framework for graphics, input, etc., so I would be locked into the .NET or Java world anyway. I'm somewhat inclined to pick up Oxygene in the future for something (there's also a native OSX and iOS target in the works), but not right now. Also, it's certainly not cheap for just a few test programs. Then there is Xamarin - Mono for Android and iOS (MonoGame should work here?) - but that's even more expensive. When you look at Android game engines and frameworks specifically, you end up with things like AndEngine, Cocos2D-X, and libGDX.

After a little bit of research, I settled on libGDX. It is a Java framework with some native parts (for tasks where JVM performance may be inadequate), currently targeting desktop (Win/OSX/Linux), Android, and HTML5 (Javascript + WebGL, in fact). Graphics are based on OpenGL (desktop) and OpenGL ES (Android and web). The great thing is that you can do most of the development in the desktop Java version, with faster debugging and code hot swapping. One can also use OpenGL commands directly, which is a must (at least for me).

The Test

I wanted to do some test program first, and I decided to convert the glSOAR terrain renderer to Java and OpenGL ES (GLES). The original is written in Object Pascal using desktop OpenGL. The plan was to convert it to Java and OpenGL ES with as few modifications as possible. At least for 3D graphics, libGDX is a relatively thin layer on top of GLES. You have some support classes like texture, camera, vertex buffer, etc., but you still need to know what a projection matrix is, bind resources before rendering, and so on. The following is a list of a few ideas, tasks, and problems from the conversion (focusing on Java and libGDX).

Project Setup

Using Eclipse, I created the project structure for libGDX-based projects. It is quite neat actually: you have a main Java project with all the shared platform-independent code and then one project for each platform (desktop, Android, ...). Each platform project has just a few lines of starter code that instantiates your shared main project for the respective platform (of course you can do more here).

Two problems here though. Firstly, changes in the main project won't trigger a rebuild of the Android project, so you have to trigger it manually (like by adding/deleting an empty line in the Android project code) before running on Android. This is actually a bug in the ADT Eclipse plugin version 20, so hopefully it will be fixed in v21 (you can star this issue).

The second issue is asset management, but that is easy to fix. I want to use the same textures and other assets in the version for each platform, so they should be located in some directory shared between all projects (like inside the main project, or a few directory levels up). The thing is that for Android all assets are required to be located in the assets subdirectory of the Android project. The recommended solution for libGDX here is to store the assets in the Android project and create links (link folder or link source) in Eclipse for the desktop project pointing to the Android assets. I didn't really like that. I want my stuff to be where I want it to be, so I created some file system links instead. I used the mklink command in Windows to create junctions (as Total Commander wouldn't follow symlinks properly):

d:\Projects\Tests\glSoarJava> mklink /J glSoarAndroid\assets\data Data
Junction created for glSoarAndroid\assets\data <<===>> Data
d:\Projects\Tests\glSoarJava> mklink /J glSoarDesktop\data Data
Junction created for glSoarDesktop\data <<===>> Data

Now I have a shared Data folder at the same level as the project folders. In the future, though, I guess some platform-specific assets will be needed (like different texture compressions, sigh).

Java

Too bad Java does not have any user-defined value types (like struct in C#). I split the TVertex type into three float arrays: one for the 3D position (at least value-type vectors would be nice - it's vertices[idx * 3 + 1] instead of vertices[idx].y now; which is more readable?) and the rest for the SOAR LOD-related parameters. Making TVertex a class and spawning millions of instances didn't seem like a good idea even before I began, and it would be impossible to pass the positions to OpenGL that way anyway.
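A sketch of that layout (the field names are made up here; the LOD fields follow the usual per-vertex error and bounding-sphere radius that SOAR needs):

// Struct-of-arrays replacement for the Object Pascal TVertex record.
// One float[] holds packed xyz positions (ready to hand to OpenGL),
// the others hold the SOAR LOD parameters per vertex.
public final class TerrainVertices {
    public final float[] positions; // x,y,z interleaved: 3 floats per vertex
    public final float[] errors;    // SOAR: object-space geometric error
    public final float[] radii;     // SOAR: bounding-sphere radius

    public TerrainVertices(int vertexCount) {
        positions = new float[vertexCount * 3];
        errors = new float[vertexCount];
        radii = new float[vertexCount];
    }

    // vertices[idx].y from the Pascal version becomes:
    public float getY(int idx) {
        return positions[idx * 3 + 1];
    }
}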

Things like data for VBOs, vertex arrays, etc. are passed to OpenGL in Java using descendants of the (direct) Buffer class. The same goes for generating IDs with glGen[Textures|Buffers|...] and basically everything that takes pointer-to-memory parameters. Of course, it makes sense in an environment where you cannot just touch memory as you please. Still, it is kind of annoying for someone not used to it. At least libGDX comes with some buffer utils, including fast copying through native JNI code.
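For example, handing one of those float arrays to a VBO goes through a direct NIO buffer, roughly like this (libGDX's BufferUtils can allocate and fill such buffers through JNI as well):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;

public final class VboUpload {
    // Copies a plain float[] into a direct buffer and uploads it as a VBO.
    public static int createVbo(float[] data) {
        FloatBuffer buffer = ByteBuffer
            .allocateDirect(data.length * 4)   // 4 bytes per float
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
        buffer.put(data).flip();               // fill and rewind for reading

        int handle = Gdx.gl20.glGenBuffer();   // no pointer-out parameters in Java
        Gdx.gl20.glBindBuffer(GL20.GL_ARRAY_BUFFER, handle);
        Gdx.gl20.glBufferData(GL20.GL_ARRAY_BUFFER, data.length * 4,
                              buffer, GL20.GL_STATIC_DRAW);
        Gdx.gl20.glBindBuffer(GL20.GL_ARRAY_BUFFER, 0);
        return handle;
    }
}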

Performance

The roots of the SOAR terrain rendering method are quite old today and come from the times when it was okay to spend CPU time to limit the number of triangles sent to the GPU as much as possible. That has been the complete opposite of the situation on PCs for a long time (unless you had an old integrated GPU, that is). I guess today the same is true for the hardware in mobile devices as well (at least the new ones). And there will also be some Java/Dalvik overhead...

Anyway, this is just an exercise, so the end result may very well be useless - the experience gained during development is what counts.

OpenGL >> GLES

Continue reading in Part 2, which focuses on the OpenGL to OpenGL ES transition.