Tuesday, April 30, 2019

Fixing the black border in your game images with ImageMagick.

Star with unintended black border
What's with that black around the star? That's because the image uses "transparent black" (or in CSS notation, rgba(0, 0, 0, 0)) for all fully transparent pixels and white with varying alpha otherwise. This is not a problem for image-editing software like Photoshop, but it is a problem for OpenGL, especially if you use linear interpolation to resize your image.

What actually happened? If linear interpolation is enabled, the GPU samples both the white pixels and the "transparent black" pixels, which results in a gray color with alpha around 0.5. This is not what you want, as it gives your image an unintended black border, which may or may not be acceptable for your game.
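To illustrate the math (a rough Python sketch, not how the GPU actually samples textures): interpolating halfway between an opaque white pixel and a "transparent black" pixel mixes the color channels along with the alpha:

```python
def lerp_rgba(a, b, t):
    """Linearly interpolate two RGBA pixels component-wise (0-255 range)."""
    return tuple(round(ca + (cb - ca) * t) for ca, cb in zip(a, b))

opaque_white      = (255, 255, 255, 255)
transparent_black = (0, 0, 0, 0)

# Sampling halfway between the two pixels mixes the RGB channels too,
# producing semi-transparent gray instead of semi-transparent white.
print(lerp_rgba(opaque_white, transparent_black, 0.5))  # (128, 128, 128, 128)
```

That gray, drawn over the background, is exactly the dark fringe in the screenshot.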

A solution for this is to modify your image to use "transparent white" instead. Based on this answer, and assuming you have ImageMagick version 7 or later, I came up with this command:

magick convert input.png -channel RGB xc:"#ffffff" -clut output.png

And here's the result.
Star without black border around it.

Now what happens here is that we replace every color channel with 255 (resulting in white) while keeping the alpha values intact. The GPU then sees every pixel as white, varying only in alpha, so it only interpolates the alpha because all the colors are identical.
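To sketch the effect in Python (illustrative only, not actual GPU code): with every pixel forced to white, the same halfway interpolation only fades the alpha:

```python
def lerp_rgba(a, b, t):
    """Linearly interpolate two RGBA pixels component-wise (0-255 range)."""
    return tuple(round(ca + (cb - ca) * t) for ca, cb in zip(a, b))

opaque_white      = (255, 255, 255, 255)
transparent_white = (255, 255, 255, 0)   # after the -clut trick

# RGB stays white; only the alpha is interpolated, so no gray fringe.
print(lerp_rgba(opaque_white, transparent_white, 0.5))  # (255, 255, 255, 128)
```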

And if you plan to pass your image through zopflipng, make sure not to pass --lossy_transparent, as that option turns all fully transparent pixels back into "transparent black", which is the source of the problem.

UPDATE: The ImageMagick command above won't work for images with more than one color. I forked an alpha-bleeding program (switching it to LodePNG to ease MSVC compilation), which can be found here: https://github.com/MikuAuahDark/alpha-bleeding.

Thursday, April 25, 2019

VS2013 RTM cl.exe and "Use Unicode UTF-8 for worldwide language support"

There's a feature in Windows 10 that lets you pass UTF-8 strings to the C function fopen and other ANSI WinAPI functions. This makes it feel Unix-like, where fopen expects a UTF-8 filename. However, this doesn't mean everything works as expected: Microsoft warns that the feature may break applications that assume a multi-byte character is at most 2 bytes. And unfortunately, this is true for the VS2013 RTM cl.exe compiler.

C:\Users\MikuAuahDark>cl.exe test.c
Microsoft (R) C/C++ Optimizing Compiler Version 18.00.21005.1 for x64
Copyright (C) Microsoft Corporation.  All rights reserved.

test.c
test.c : fatal error C1001: An internal error has occurred in the compiler.
(compiler file 'f:\dd\vctools\compiler\cxxfe\sl\p1\c\p0io.c', line 2812)
 To work around this problem, try simplifying or changing the program near the locations listed above.
Please choose the Technical Support command on the Visual C++
 Help menu, or open the Technical Support help file for more information

The file test.c is an empty file, but that error shows up regardless of the input file you specify. What happened here?

Turns out, there's a check in c1.dll which is basically equivalent to this C code:

CPINFO cpInfo;
UINT chcp = GetACP();       /* active ANSI code page */
GetCPInfo(chcp, &cpInfo);

if (cpInfo.MaxCharSize > 2) /* UTF-8 reports 4, so this fires */
    internal_error("f:\\dd\\vctools\\compiler\\cxxfe\\sl\\p1\\c\\p0io.c", 2812);
 
It assumes the maximum multi-byte character size is 2 bytes, but in this case I enabled the feature called "Use Unicode UTF-8 for worldwide language support", so this is what happens:
  1. GetACP returns 65001 (the UTF-8 code page).
  2. GetCPInfo returns information about the UTF-8 code page, where the max char size is 4.
Is there any workaround for this? I'm afraid not. Basically, we must make sure cl.exe never sees 65001 as the code page, because there's an explicit check for it. There's Locale Emulator, but that only emulates the locale string, not the code page.

If anyone has found out how to update VS2013 in 2019, please comment below. Yes, using VS2013 is mandatory in my case because I need to ensure compatibility with Windows Vista, which means targeting lower than Windows 7 SP1.

Friday, January 25, 2019

Re-Implementing Live2D runtime in LÖVE: Performance Optimization

Please see my previous blog post for more information. Do you think I'm really satisfied with the 1.2ms performance? No. I think I can do more. Note that whenever I write a time measurement, it means the time taken to update the Kasumi casual summer model (shown below).


Code Optimization

After my previous blog post, I saw that there were many optimizations still to be done. I started by reducing temporary table creation: instead of creating a new table in the function body, I create the table at file scope and reuse it over and over. In the motion manager code, I used this variant of the algorithm to remove motion data when necessary. I also localized functions that are called every frame, mostly functions from the math namespace like math.min, math.floor, and math.max, and cached variables that are used multiple times to reduce table lookup overhead.

Although the optimizations listed above don't save a significant amount of time when the JIT is used, they are somewhat significant for the non-JIT codepath. The next optimization was converting the hair physics code to use an FFI datatype for the JIT codepath, and a class otherwise. Testing gives better performance: 1.17ms. Not much, but better than nothing.

A problem arose: when I inspected the verbose trace compiler output, I noticed lots of "NYI: register coalescing too complex" trace aborts in the curved surface deformer algorithm, which indicates I was using too many local variables there. At first this was a bit hard to solve, but I managed to optimize it by analyzing the interpolation calculation done by the curved surface deformer algorithm, which eliminated the trace aborts entirely. Testing gives slightly better performance: 1.15ms.

Rendering Optimization

The last optimization I did was the Mesh optimization. Since I copied the Live2LOVE Mesh rendering codepath as-is, it was uploading lots of duplicate data to the GPU, duplicating the vertices based on the vertex map manually on the CPU side, because I thought the vertex map could change. This can be very slow for the non-JIT codepath, because the amount of data that needs to be sent in Mesh:setVertices can be too much. As a reference, before this optimization the non-JIT codepath (LuaJIT interpreter) took 6ms.

After getting a better overview of how Live2D rendering works, I can safely assume the vertex map never changes, so I started by reducing the number of vertices that need to be uploaded to the GPU and sending the vertex map instead. This actually gives a more significant performance boost on the CPU side. The JIT codepath now runs at 1.05ms, very close to Live2LOVE's 1ms. The interpreter (LuaJIT) took 4ms; yes, 4ms to update the model. Unfortunately, vanilla Lua 5.1 took as long as 12ms to update the model.
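The deduplication itself is the standard indexed-mesh trick: upload each unique vertex once and describe the triangles with a vertex map. Here's a rough sketch of it in Python (the actual implementation is Lua on top of LÖVE's Mesh:setVertexMap, so the names here are illustrative):

```python
def build_indexed_mesh(triangle_vertices):
    """Collapse duplicated per-triangle vertices into a unique vertex
    list plus a 1-based vertex map (1-based, Lua-style)."""
    unique, index_of, vertex_map = [], {}, []
    for v in triangle_vertices:
        if v not in index_of:
            index_of[v] = len(unique) + 1  # 1-based index
            unique.append(v)
        vertex_map.append(index_of[v])
    return unique, vertex_map

# Two triangles sharing an edge: 6 vertices collapse to 4.
tris = [(0, 0), (1, 0), (0, 1),
        (1, 0), (1, 1), (0, 1)]
unique, vmap = build_indexed_mesh(tris)
print(len(unique), vmap)  # 4 [1, 2, 3, 2, 4, 3]
```

Only the unique vertices get uploaded each frame; the vertex map is sent once and never changes.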

The non-JIT codepath is forced to use the table variant of Mesh:setVertices, because the overhead of FFI is higher than the benefit of using the Data variant. The non-JIT codepath also can't assume FFI is available at all: LuaJIT can be compiled without FFI support (but who would want to do that?), or the code may run under vanilla Lua 5.1. One of my goals for this project is to provide maximum compatibility with Lua 5.1 too, even though LÖVE is compiled with LuaJIT by default.

Experimental Rendering Codepath

Unfortunately, I had to throw away the mesh batching technique I mentioned in my previous blog post. The batching caused a very significant slowdown in both the JIT and non-JIT codepaths with very little GPU-side improvement, so I decided to abandon it and go back to the old approach of updating the models and drawing each Mesh one by one. You can see in the screenshot below that the model takes 166 draw calls

plus additional draw calls caused by IMGUI.

Wednesday, January 16, 2019

Re-Implementing Live2D runtime in LÖVE


 (video above shows my implementation in action using the LÖVE framework)

Live2D is a nice thing; the fluid character movement adds an extra touch to the games that use it. For my personal use, however, it has an annoying limitation: Lua is not officially supported, especially not LÖVE. Well, there are 2 ways to overcome this.

Writing external Lua C module

This is probably the simplest way (but not that easy). Link with the Live2D C++ runtime library files, add code which interacts with Lua (and LÖVE), and you get Live2LOVE. This module is actually very fast, considering the Lua-to-C API overhead, and works by letting Live2D do the model transformation and LÖVE do the model rendering. Since it uses the official Live2D runtime, it has these limitations:
  • Must link with OpenGL. This is not a problem since LÖVE uses OpenGL already.
  • VS2017 is not supported (you have to use Cubism 3 for that). It does support down to VS2010, and since LÖVE requires VS2013, this is not really a problem unless you compile LÖVE against the VS2017 runtime.
  • MinGW/Cygwin compilation is not supported. Not really a problem, since compiling LÖVE on Windows using MinGW/Cygwin isn't supported either.
  • Linux and macOS are not supported. This is the real problem: not all people use Windows to run LÖVE.
So another idea came to my mind:

Re-Implement Live2D Cubism 2 Runtime

In Lua, because why not. This was a somewhat time-consuming process and took me more than 3 weeks to get model rendering working as intended. My additional goal is to have a Live2LOVE-compatible interface too, so switching between implementations is simply a matter of changing the "require"d module. From now on, "Live2D Runtime" refers to the Live2D Cubism 2 Runtime.

I started by downloading the Live2D Runtime for WebGL (JavaScript) and beautifying the code (since it ships as a .min.js file). As expected, the method names are obfuscated. So I unpacked the Android version of the Live2D C++ Runtime and deobfuscated the method names by matching the arguments against the Live2D Runtime C++ header files, with the help of the IDA pseudocode decompiler to compare the implementation with the JavaScript one. This whole process took 2 weeks.

Then I started writing the Lua equivalent code based on the JavaScript Live2D Runtime. This was the easiest part, since JavaScript is also dynamically typed; I just had to carefully translate the 0-based indexing to 1-based. Then came fixing bugs, writing the Live2LOVE-compatible interface so I could reuse my existing Live2D viewer code built on Live2LOVE, and testing.

After a week, I got model rendering, motion, and expressions working, using the existing code from my LÖVE-based Live2D model viewer pointed at my own implementation instead of Live2LOVE. Then the next problem came: it's 4x slower than Live2LOVE. Live2LOVE took 1ms to render Kasumi (casual summer clothes), while my implementation took 4ms to render the same model. I had already written the implementation carefully so that LuaJIT happily accepts my code and avoids bailing out to the interpreter as much as possible.

I started optimizing by using "Data" objects instead of plain tables when updating the "Mesh" object for drawing. This cut the update time significantly, from 4ms to 1.7ms, so using tables to update the "Mesh" object is always a bad idea. Someone on the LÖVE Discord then said, "try to use FFI everywhere instead of plain tables". At first I disagreed, because I want to preserve compatibility with mobile, but then I decided to proceed by falling back to tables when FFI is not suitable (JIT is off, FFI support is not enabled, or running vanilla Lua). I swapped most types from plain tables to FFI objects, and I can get as low as 1.2ms, very close to Live2LOVE's 1ms.

Conclusion

Re-implementing the Live2D Runtime was a nice experience. It gave me a better sense of when to start optimizing code instead of optimizing early, plus an overview of how Live2D model transformation works. Apparently it can't beat the C++ version of the official Live2D Runtime in terms of model updating, but I think it can beat it in terms of model rendering. I'm thinking of a "Mesh" batching technique, which is basically: accumulate the vertices to render, then draw them all at once when a flush is requested. I'm satisfied with the current result, but I think I can still do better
 

... and hope to God it succeeds without problems.
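The accumulate-then-flush idea can be sketched like this (illustrative Python; a real implementation would be Lua feeding a single LÖVE Mesh, so all names here are hypothetical):

```python
class MeshBatcher:
    """Accumulate vertices from many draw requests, then submit them
    all in a single draw call when flushed (illustrative sketch only)."""
    def __init__(self, draw_fn):
        self.draw_fn = draw_fn   # stand-in for the actual GPU submission
        self.vertices = []

    def add(self, verts):
        # No GPU work yet; just collect the vertices.
        self.vertices.extend(verts)

    def flush(self):
        if self.vertices:
            self.draw_fn(self.vertices)  # one draw call for everything
            self.vertices = []

calls = []
batch = MeshBatcher(calls.append)
batch.add([(0, 0), (1, 0), (0, 1)])
batch.add([(1, 0), (1, 1), (0, 1)])
batch.flush()
print(len(calls))  # 1 draw call instead of 2
```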

Saturday, December 1, 2018

San Andreas "Modern" Modding Tools & Some Rant

When I search Google for tutorials about modding GTA San Andreas, most of them still use very old tools. It's usually the Indonesian modding community that keeps using old modding tools for "compatibility" reasons, but in fact the old tools were unstable, and of course the newer ones are more stable (although they may need a modern system). For today's blog post, I just want to tell you that there are better tools out there that you can use as replacements. Note that this blog post can be considered a rant, depending on your perspective.

First, an IMG editor. This is an essential tool for modifying the game's models and textures. Usually, the blog posts tell you to use "IMG Tool v2.0". Excuse me, there's a way better tool for this: Alci's IMG Editor. Compared to IMG Tool v2.0, Alci's IMG Editor is far superior. It has a bigger window, can import/export multiple files, and most importantly, it can create a new IMG file from scratch, unlike IMG Tool, where you're restricted to existing IMG files.

Second, a TXD tool. This is a must-have tool if you plan to modify texture files (e.g. creating your own paintjob). Usually, the blog posts tell you to use "TXD Workshop", but excuse me, TXD Workshop is very limited, and the last time I tried the 15 Years edition, the mipmap generation was buggy. If you're interested in how it's buggy: my image is 4096x4096, compressed to DXT1, with 12 levels of mipmaps generated. It took around 3 minutes on my i5 7th gen laptop, and the result fails to generate mipmaps for the odd levels (3, 5, 7, ...). This results in a "black shade" when the game renders the texture. So let me introduce you to Magic.TXD, a modern take on TXD Workshop: easy to use, and it generates mipmaps faster and without bugs, of course. It also comes with localizations popular among the SA modders, including Indonesian.
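For reference, here's a quick sketch of the mip chain for a 4096x4096 texture (each level halves the previous one), showing the sizes of the levels that came out buggy:

```python
def mip_sizes(size, levels):
    """Edge length of each mip level: every level halves the
    previous one, never going below 1 pixel."""
    return [max(size >> i, 1) for i in range(levels)]

sizes = mip_sizes(4096, 12)
print(sizes)  # [4096, 2048, 1024, 512, 256, 128, 64, 32, 16, 8, 4, 2]

# The buggy odd levels from the screenshot below (3, 5, 7, 11):
print(sizes[3], sizes[5], sizes[7], sizes[11])  # 512 128 32 2
```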

The texture created with TXD Workshop + mipmaps. Levels 3, 5, 7, and 11 are buggy and the game renders them as black (notice the shade of black near the headlights). That's why it looks dark for some reason.
The texture created with Magic.TXD. Now everything looks nice and there's no more black shade.

This one is not really a modding tool, but I want to include it anyway. If you're a script modder, you may want to consider MoonLoader, a "modern version" of CLEO. Did you know that CLEO scripts are actually slow compared to Lua scripts? Or do you hate CLEO scripts and prefer writing in a different language? Then MoonLoader is for you. One of its most notable features is error handling. A CLEO script errors? Goodbye to your current game session. A MoonLoader script errors? The error is logged and only that script's execution is stopped.

As a side note, if you only want to dump TXD textures to PNG or TGA, FFmpeg supports the TXD format and can transcode it to supported image formats (but sadly, it can't create TXDs).