tag:blogger.com,1999:blog-8803068836732986492024-03-14T15:36:10.995+08:00MikuAuahDarkMikuAuahDark personal blog.Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.comBlogger47125tag:blogger.com,1999:blog-880306883673298649.post-45025933954633577582023-09-17T23:57:00.004+08:002023-09-18T00:09:13.561+08:00How Network Interface Metric Messes With Genshin Login System<p>TL;DR: If your Genshin Impact is, for some reason, unable to save your login session across shutdowns or reboots, check the Interface Metric numbers in your network settings across shutdowns/reboots and make sure they stay the same.</p><p>Remember when I had issues with Genshin Impact <a href="https://auahdark687291.blogspot.com/2021/04/a-review-of-new-laptop-hp-envy-x360-13.html" target="_blank">logging me out on system reboot or shutdown 2 years ago?</a> Well, now it's happening again <a href="https://auahdark687291.blogspot.com/2022/10/the-case-of-rare-laptop-sku-lenovo.html" target="_blank">in the laptop I bought a year ago</a>. This time, however, I decided to diagnose it further.</p><p>Ok, first of all, it's important to know what changed before the issue occurred. Since I know it didn't happen last week, it must be some sort of driver update. From my previous experience, it had to do with network drivers. This means I can narrow it down to network driver updates. I remember Windows Update asked me to restart to install a Wi-Fi driver. Gotcha, it must be a bad Wi-Fi driver. Uninstalled it, restarted, and now the game loads fine.</p><p></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEQkAiFQXkjpBMWRoUyAakLJditAIVmgJvOExqy2dJCFWrlIOumabqgL0gvl__KeadnO-DybPYVPA8O1R2jpre4wd6z0en8C2gGQ3S8sPcXaTDHvbOtDW-9U9kAPCN54g9_VMfljy74QTq2kXlqVYVwShPHZGt2FmtRYFrvNBJrHywqZOFBpHyoLQwjEqe/s1920/temporary.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1080" data-original-width="1920" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEQkAiFQXkjpBMWRoUyAakLJditAIVmgJvOExqy2dJCFWrlIOumabqgL0gvl__KeadnO-DybPYVPA8O1R2jpre4wd6z0en8C2gGQ3S8sPcXaTDHvbOtDW-9U9kAPCN54g9_VMfljy74QTq2kXlqVYVwShPHZGt2FmtRYFrvNBJrHywqZOFBpHyoLQwjEqe/w400-h225/temporary.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">The network driver messing itself up while I suffer as Genshin doesn't remember me again.</td></tr></tbody></table>... except the next day, it doesn't. Could it be some nondeterminism in the network adapter order going on? Ok, so I tried an experiment. Restarted the system: game loads fine. Okay, good. Shut down the laptop, then started it again: now the game doesn't remember me. Shut down again: nope. Restarted it: works. Shut down again bypassing "Fast Startup" (protip: hold Shift while clicking "Shut down"): works. Shutdown with Fast Startup? Nope.<p></p><p>From that experiment I can conclude that:</p><ol style="text-align: left;"><li>A fresh start (from a reboot or full shutdown) will make the game think I'm on the current device, logging me in.<br /></li><li>Starting up from Fast Startup will make the game think the network adapter has changed, logging me out.</li></ol><p>Then, what else can we use to find out the exact issue? In Windows 11, there's "Hardware and connection properties". 
It lists all adapters, excluding WSL/Hyper-V adapters but including Wi-Fi Direct adapters. Re-doing the experiment while watching this gives me some insights:</p><ol style="text-align: left;"><li>On a fresh start, the Ethernet is listed topmost.</li><li>On starting from Fast Startup, the Ethernet is listed near the bottom, but not bottommost.</li></ol><p>Well, okay, that probably explains why. Genshin Impact "calculates" the key used to load the session based on the network adapter order. So the next question is: how to change the order?</p><p>Googling "windows change network order" mentions changing the "Interface Metric". <a href="https://www.windowscentral.com/how-change-priority-order-network-adapters-windows-10" rel="nofollow" target="_blank">There are 2 ways, with PowerShell and with the GUI</a>. The PowerShell command is "Get-NetIPInterface", which, when run, shows the following.</p><p></p><p></p><p></p><p></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="https://cdn.discordapp.com/attachments/922182394323292173/1152589110029070426/gambar.png" class="transparent" height="204" src="https://cdn.discordapp.com/attachments/922182394323292173/1152589110029070426/gambar.png" style="margin-left: auto; margin-right: auto;" width="400" /></td></tr><tr><td class="tr-caption" style="text-align: center;">Top is the problematic output. Bottom is the correct output that Genshin recognizes for my laptop.<br /></td></tr></tbody></table><p></p><div style="text-align: center;"></div><p></p>Redoing the experiment reveals that the "Ethernet" "InterfaceMetric" is set to 5 on a fresh start (where the game remembers my login). However, on Fast Startup, it's set to 75. Just to confirm, I set it to 75: entering the game, the game doesn't remember me. Ok, set the "InterfaceMetric" of "Ethernet" IPv4 and IPv6 to 5 (see the link above for the how-to), and voilà, the game remembers me again.<p></p><p>This finally solves the mystery that I also had 2 years ago. It should be the same issue, where "InterfaceMetric" went non-deterministic across reboots/shutdowns. I was about to schedule my whole day to perform a clean installation of the laptop, but now the mystery is solved once and for all.</p><p> </p><p></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjJeY5xRckw2YCNlTHj2LKhbsCVHhbSqjwNKAWWe_C8vh8127VVgP4WpDlcfNYQnGrCwC3pXmjDyQBzaz32UlkfkKzJZ_q82oak5vzIKo8BIFwBuOnp8dZAtCR0o9NSnK-pTCmuTvEe3C3w8a6Scziqgyy2ixtuqAzTyouyqAZwnJ9sKewyFUMOxR3tGPVL/s1652/temporary2.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="768" data-original-width="1652" height="186" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjJeY5xRckw2YCNlTHj2LKhbsCVHhbSqjwNKAWWe_C8vh8127VVgP4WpDlcfNYQnGrCwC3pXmjDyQBzaz32UlkfkKzJZ_q82oak5vzIKo8BIFwBuOnp8dZAtCR0o9NSnK-pTCmuTvEe3C3w8a6Scziqgyy2ixtuqAzTyouyqAZwnJ9sKewyFUMOxR3tGPVL/w400-h186/temporary2.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Me looking at <a href="https://genshin-impact.fandom.com/wiki/Navia" rel="nofollow" target="_blank">her</a> for the first time be like: "<a href="https://www.youtube.com/watch?v=UwIZZZI92Yc" rel="nofollow" target="_blank">Yup. I'm going to marry this girl!</a>"<br /></td></tr></tbody></table>... 
until it logs me out again. <br /><p></p><p></p><br /><br />Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-76449314469877342972023-01-19T00:35:00.000+08:002023-01-19T00:35:36.143+08:00Can I have 10ms Latency in Genshin Impact in Indonesia?<p>The other day, my friend wondered if there would ever be an Indonesian ISP that allows him to play Genshin Impact with a 10ms ping.</p><p>Well, sorry to disappoint, but that's impossible. No, it's not because our technology isn't advanced enough (actually, it is); it's impossible because there's a hard limit imposed by the laws of physics: the speed of light.</p><p>Why? First, let's do some background. Genshin Impact is a single-player-oriented online game (weirdly). Since it's online, it has various servers, which are <a href="https://genshin.global/game-servers/" rel="nofollow" target="_blank">Europe, America, Asia, Taiwan</a>, and China (only for people who live in China). Focusing on Asia, that server is hosted in Tokyo, Japan. My friend and I live in Indonesia.</p><p>Now, the reason it's impossible is that the distance between Indonesia and Japan is <a href="https://distancecalculator.globefeed.com/Distance_Between_Countries_Result.asp?fromplace=Indonesia&toplace=Japan" rel="nofollow" target="_blank">4821.39 km</a> (Wolfram says it's 4912 km, see below). The speed of light constant is 299792458 m/s. Doing the calculation tells me the minimum attainable latency over that distance is <a href="https://www.wolframalpha.com/input?i=distance+from+Indonesia+to+Japan+divided+by+speed+of+light" rel="nofollow" target="_blank">~16.38ms</a>. That means light needs around 1 game frame on a 60Hz monitor with VSync (or 1 game frame in Genshin Impact) to travel from Indonesia to Japan. Multiplying it by 2 for the round-trip latency gives you <b>~32.76ms</b>. From that alone, we can already conclude that getting a 10ms ping is <b>not possible.</b><i> </i></p><p><i>"But data transfers instantly"</i><br />No, it doesn't. It still obeys the laws of physics. That means, in the best case, your data transfer latency is limited by the speed of light. Well, the <i>best case</i>. In fact, <a href="https://www.jumpfiber.com/fiber-optics-speed-of-light-broadband-internet/" rel="nofollow" target="_blank">light travels slower in fiber optics</a>, at around 2/3 of that. Taking it into account, the minimum attainable latency is <b>~49.15ms</b>. This can be worsened further by additional latency introduced by your WiFi and/or router and the ISPs on both ends. That means <b>49.15ms is the data transfer latency only, ignoring the router and ISP latency</b>.</p><p><i>"Alright then, but I want exactly 10ms latency. I don't want to live in Japan though. Where should I live?"</i><br />For this, I assume the overhead latency of your router and ISP is ignored (a.k.a. exactly 0ms). Spending the whole 10ms budget one-way over fiber works out to living 1998 km away from Tokyo, Japan (for a true 10ms round trip, halve that to roughly 999 km). The closest would be South Korea, then Shanghai, China (but in this case you'd better go with their China client and China servers for minimum latency), then the area around Sakhalin Oblast in Russia. If you want to take your router and ISP latency into account, then you may want to live in Yuzhno-Sakhalinsk in Russia or South Korea. 
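</p><p>To sanity-check the numbers above, here is the whole chain of arithmetic in one runnable snippet (a quick Python sketch; the 4912 km distance and the 2/3 fiber factor are the figures cited earlier):</p><pre><code class="language-python">C = 299792458        # speed of light in m/s
DISTANCE = 4912000   # Indonesia to Japan in meters (Wolfram's figure)

one_way_vacuum = DISTANCE / C                 # ~16.4 ms
round_trip_vacuum = 2 * one_way_vacuum        # ~32.8 ms
round_trip_fiber = round_trip_vacuum / (2/3)  # light in fiber is ~2/3 c: ~49.2 ms

for label, t in [("one-way, vacuum", one_way_vacuum),
                 ("round-trip, vacuum", round_trip_vacuum),
                 ("round-trip, fiber", round_trip_fiber)]:
    print(f"{label}: {t * 1000:.2f} ms")</code></pre><p>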
That's about as close to Japan as you can get without having to live in Japan.<br /><br />Note that if you have a copper wire running from Russia to Japan instead of fiber optics, it <a href="https://networkengineering.stackexchange.com/q/16438" rel="nofollow" target="_blank"><i>may be faster</i></a>, but electromagnetic interference will assure ...<br /><br />... you're gonna have a bad time</p><p>Also on cohost: <a href="https://cohost.org/AuahDark/post/866048-can-i-have-10ms-late">https://cohost.org/AuahDark/post/866048-can-i-have-10ms-late</a><br /></p>Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-54891818060826365052022-11-20T16:47:00.005+08:002022-11-22T11:03:37.165+08:00Slashes in Lua "require" function: Use periods/dots!<p>This is a somewhat misunderstood topic, but I hope this blog post will resolve it once and for all. I often see people using Lua <code>require</code> like this</p>
<pre><code class="language-lua">local gamera = require("libs/gamera")
local nvec = require("libs/nvec")
-- Rest of the code</code></pre>
<p>At first glance, there's nothing wrong, right? No. Using slashes in <code>require</code> works by accident, and no, this is not a happy accident. What you should do is:</p><pre><code class="language-lua">local gamera = require("libs.gamera")
local nvec = require("libs.nvec")
-- Rest of the code</code></pre><p>So, why the dots there? Because <code>require</code> is not expecting a <b>path</b>; it's expecting a <b>module name</b>. To explain why, let's first take a tour of <a href="https://www.lua.org/pil/8.1.html" rel="" target="_blank">Programming in Lua, Section 8.1</a>, with the important text marked in bold.<br /></p><blockquote><p>The path used by <code>require</code> is a <b>little different from
typical paths</b>.
Most programs use paths as a list of directories wherein to search
for a given file.
However, ANSI C (the abstract platform where Lua runs)
does not have the concept of directories.
Therefore, the path used by <code>require</code> is a <b>list of <i>patterns</i></b>,
each of them specifying an alternative way to transform
a <b>virtual file name</b> (the argument to <code>require</code>)
into a real file name.</p></blockquote><p>So, it's clear that <code>require</code> does <b>not expect a path to a file</b>; it expects a <b>module name</b> (or virtual file name; we'll use "module name" from now on). To understand how Lua transforms the module name into the actual path that Lua will try to load, it's important to know about <code>package.loaders</code>.</p><p><code>package.loaders</code> is an array of functions, each of which tries to load a module based on the module name passed to <code>require</code>. The <a href="https://www.lua.org/manual/5.1/manual.html#pdf-package.loaders" target="_blank">Lua 5.1 manual has more information</a> about this, but I'll try to explain it as simply as possible. In Lua 5.1 (and LuaJIT), there are 4 loaders in this array, tried in this order, but I'll only explain the first 3: </p><ol style="text-align: left;"><li>Checks for an existing module loader in the <code>package.preload</code> table, with the module name (passed from <code>require</code>) as the table key, such that when called, it loads the module. If it's non-nil, then the value is returned.<br /></li><li>Replaces <b>all dots in the module name</b> with the OS-specific directory separator (we'll call this the <b>file path</b>). Then, for each semicolon-separated pattern specified in <code>package.path</code>, substitutes the question mark with the file path and tries to open the result as a Lua file. If it loads successfully, the function chunk is returned.</li><li>This one needs 2 components: a <b>file path</b> and an <b>entry point</b>. Replace <b>all dots in the module name</b> with the OS-specific directory separator to get the <b>file path</b>, and replace <b>all dots in the module name</b> with underscores, with <code>luaopen_</code> prepended, to get the <b>entry point</b>. Then, for each semicolon-separated pattern specified in <code>package.cpath</code> (note the "c" in cpath), it tries to load the resulting file as a shared library (a DLL on Windows), then returns the Lua C function with the specified <b>entry point</b> inside the shared library.</li><li>It's an all-in-one loader, which doesn't matter in our case. </li></ol><p>If you're still confused, this pseudo-Python code should help you understand how it works.</p><pre><code class="language-python">import os
package.preload = dict()
package.path = "?.lua;path/to/?.lua"
package.cpath = "?.dll;?.so;path/to/?.dll;path/to/?.so"
def loader_1(modname):
    module = package.preload.<a href="https://docs.python.org/3/library/stdtypes.html#dict.get" target="_blank">get</a>(modname)
    if module is not None:
        return module
    return f"no field package.preload['{modname}']"
def loader_2(modname):
    file_path = modname.<a href="https://docs.python.org/3/library/stdtypes.html#str.replace" target="_blank">replace</a>(".", <a href="https://docs.python.org/3/library/os.html#os.sep" target="_blank">os.sep</a>)
    tested = []
    for path in package.path.<a href="https://docs.python.org/3/library/stdtypes.html#str.split" target="_blank">split</a>(";"):
        file_name = path.replace("?", file_path)
        chunk = <a href="https://www.lua.org/manual/5.1/manual.html#pdf-loadfile" target="_blank">load_lua_file</a>(file_name)
        if chunk is not None:
            return chunk
        tested.append(f"no file '{file_name}'")
    return "\n".<a href="https://docs.python.org/3/library/stdtypes.html#str.join">join</a>(tested)
def loader_3(modname):
    file_path = modname.replace(".", os.sep)
    entry_point = "luaopen_" + modname.replace(".", "_")
    tested = []
    for path in package.cpath.split(";"):
        file_name = path.replace("?", file_path)
        module = open_shared_library(file_name) # <a href="https://man7.org/linux/man-pages/man3/dlopen.3.html" target="_blank">dlopen</a> or <a href="https://learn.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-loadlibrarya" target="_blank">LoadLibraryA</a>
        if module:
            symbol = get_symbol(module, entry_point) # <a href="https://man7.org/linux/man-pages/man3/dlsym.3.html" target="_blank">dlsym</a> or <a href="https://learn.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-getprocaddress" target="_blank">GetProcAddress</a>
            if symbol is not None:
                return <a href="https://www.lua.org/manual/5.1/manual.html#lua_pushcfunction" target="_blank">make_symbol_callable</a>(symbol)
            close_shared_library(module) # <a href="https://man7.org/linux/man-pages/man3/dlclose.3.html" target="_blank">dlclose</a> or <a href="https://learn.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-freelibrary" target="_blank">FreeLibrary</a>
        tested.append(f"no file '{file_name}'")
    return "\n".join(tested)
</code></pre><p>If that's clear enough, then stop reading and start fixing your <code>require</code> by replacing slashes with dots! </p><p><span></span></p><a name='more'></a><p></p><p>"But using slashes feels more natural." You'll lose that argument if the language specifies how something behaves and you're against it, unless you have a very strong reason, and "it feels more natural" is not one of them.</p><p>Consider this scenario: you have a module located at "libs/mymodule.lua". Loading it as <code>require("libs/mymodule")</code> works by accident, but now suppose you replace "libs/mymodule.lua" with "libs/mymodule.dll" or "libs/mymodule.so". The above <code>require</code> call won't work anymore because the entry-point symbol does not exist (slashes, unlike dots, are not translated to underscores; see the <code>loader_3</code> function above for why).</p><p>This is so misunderstood that <a href="https://github.com/love2d/love/pull/1782" target="_blank">I had to push a change into the next major version of LÖVE</a> to warn every user who uses slashes in their <code>require</code>.<br /></p><div><p></p></div>Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-76196659342752324752022-10-08T01:59:00.000+08:002022-10-08T01:59:01.194+08:00The Case of Rare Laptop SKU: Lenovo Ideapad Gaming 3 15ACH6<p>I bought a gaming laptop a month ago (as of writing this blog post). The laptop name is in the title, with these key selling specs:<br /></p><ul style="text-align: left;"><li>AMD Ryzen 5 5600H</li><li>RTX 3060 Laptop (90W)</li><li>16GB of DDR4-3200 RAM in dual-channel, upgradeable<br /></li><li>1920x1080 165Hz display, 100% sRGB</li><li>512GB NVMe SSD (which I later upgraded with an additional 512GB) <br /></li></ul><p>Honestly, getting an RTX 3060 laptop for cheap (I really mean "cheap": <a href="https://auahdark687291.blogspot.com/2021/04/a-review-of-new-laptop-hp-envy-x360-13.html" target="_blank">my touchscreen laptop</a> is more expensive) is a blessing. The cooling is also good, and I've never been able to reach 80 degrees Celsius when using it for gaming (70 is the highest). Now there's just a slight issue: this is a rare SKU.</p><p>Why? There are no reviews of this laptop with this GPU. If you look on the internet, all reviews of this laptop are of the ones with the RTX 3050 or 3050 Ti; none of them have the 3060. This means there's no way for me to set expectations on the performance, and no way for me to evaluate the pros and the cons of this variant. I'm on my own.<br /></p><p><a href="https://psref.lenovo.com/syspool/Sys/PDF/IdeaPad/IdeaPad_Gaming_3_15ACH6/IdeaPad_Gaming_3_15ACH6_Spec.pdf" rel="nofollow" target="_blank">Lenovo published a Product Specifications Reference</a> for this laptop, but some of the information there does not 100% reflect the unit I have. For example:</p><ul style="text-align: left;"><li>The sheet says one of the SSD slots runs at PCIe 3.0 x2, but on my unit it runs at 3.0 x4 on both slots, checked with CrystalDiskInfo.<br /></li><li>The sheet says one of the SSD slots needs the 2242 form factor (this is the one supposedly running at 3.0 x4), but this is not the case for my unit: both accept the 2280 form factor. I mistakenly bought a 2242 SSD without knowing, because I assumed their PSR was correct.</li></ul><p>Fortunately, I'm relieved to find out that there are many advantages to getting this variant. 
For example, the RTX 3050 at maximum TGP is still beaten by the RTX 3060 at its lowest TGP, and the RTX 3060 Laptop is <a href="https://www.youtube.com/watch?v=MlRxtSIqEhk&pp=ugMICgJpZBABGAE%3D" rel="nofollow" target="_blank">almost on-par</a> with the desktop variant (the 3050, 3070, and 3080 have significant differences). Furthermore, the higher VRAM (6GB instead of the 3050's 4GB) means I can barely run <a href="https://github.com/neonsecret/stable-diffusion" rel="nofollow" target="_blank">Stable Diffusion</a> locally</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEin4sUC5wAP00xstfo5U4Qu0Xfb1B0CMNMByashmNuGGVDQlG8Wz7a2XT3s0Tg1zl9X1eOnn2pcZCQ8QOxb4vI9zGReoVu5r1jGt9qHahkfoMuhnkTq2bwaUEa1k_de85sS7wr1yPm-6RgKIYoLg3-yLV-0-TQMPZfwS48r9kdzAMJZ_ztM3YxXy9LbYw/s512/seed_295668_00021.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="1girl long_hair blue_aqua_hair purple_eyes red_glasses blunt_bangs straight_hair black_blazer red_ribbon black_skirt school_girl cat_ears" border="0" data-original-height="512" data-original-width="512" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEin4sUC5wAP00xstfo5U4Qu0Xfb1B0CMNMByashmNuGGVDQlG8Wz7a2XT3s0Tg1zl9X1eOnn2pcZCQ8QOxb4vI9zGReoVu5r1jGt9qHahkfoMuhnkTq2bwaUEa1k_de85sS7wr1yPm-6RgKIYoLg3-yLV-0-TQMPZfwS48r9kdzAMJZ_ztM3YxXy9LbYw/w320-h320/seed_295668_00021.png" width="320" /></a></div><p></p><p>... to generate <a href="https://github.com/harubaru/waifu-diffusion" rel="nofollow" target="_blank">waifus</a>.<br /></p>Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-45859317335378301952021-08-03T11:49:00.004+08:002021-08-03T11:49:30.599+08:00Repeating Sakura Cleansing Ritual<div><p><b>Spoiler alert</b>: If you haven't unlocked the Sakura Cleansing Ritual world quest or Inazuma, don't read this blog post! <br /></p><p>This is supposed to be a simple blog post related to Genshin Impact. I have a pending blog post which I'm too lazy to write, but I plan an educational blog post using Genshin Impact and programming in the future.</p><p>I think most people will say this is one of the best, and saddest, world quests. Some people (including me), however, used Kazari's mask, so if you want to see her memories <b>in-game</b>, they're gone forever.</p><p></p><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/xgdfOjlbWFs" width="320" youtube-src-id="xgdfOjlbWFs"></iframe></div><p></p><p>... not quite. It turns out you can repeat some parts of the sakura cleansing ritual. That means, if you do the correct sequence, the enemies and the dialog will show up as if you're still doing the world quest. However, you can only repeat this once per day (not sure if relogging in allows you to repeat it again?). You can repeat the sakura cleansing in these locations:</p><ul style="text-align: left;"><li>Mt. Yougou (Abandoned shrine) <br /></li><li>Chinju Forest (The one where you need to find the tanukis)<br /></li><li>Kamisato Estate (With the electro radiation)<br /></li></ul><p>That means you can't repeat the ones in:</p><ul style="text-align: left;"><li>Konda Village. Nothing happens when you do the correct sequence.<br /></li><li>Araumi. 
Even if you manage to use the Memento Lens to reveal the other parts and do the correct sequence, nothing happens.</li></ul></div><p>So if you miss her and you already used the mask, then performing some part of the ritual may make you feel better. The other memory I have of her is</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-fMQaKT4D3Rc/YQi8pqdaSsI/AAAAAAAAA2w/151jxquw1p0Uhun3mshs5kbM13C89kPKwCLcBGAsYHQ/s1920/temporary.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1080" data-original-width="1920" height="225" src="https://1.bp.blogspot.com/-fMQaKT4D3Rc/YQi8pqdaSsI/AAAAAAAAA2w/151jxquw1p0Uhun3mshs5kbM13C89kPKwCLcBGAsYHQ/w400-h225/temporary.png" width="400" /></a></div><br /> ... a screenshot<br /><p></p>Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-80353927837022408222021-04-29T11:50:00.004+08:002021-07-20T20:58:57.374+08:00A Review of New Laptop: HP Envy x360 13 (2020)<p>So I finally have a new replacement for my old ASUS A456UR laptop, which I used for more than 3 years. Here are my old laptop's specifications:</p><ul style="text-align: left;"><li>Intel Core i5-7200U</li><li>NVIDIA GT 930 MX</li><li>8GB of DDR3L RAM (not quite sure about the type actually). It came with 4GB of soldered RAM.<br /></li><li>512GB of SATA SSD. It came with a 1TB hard drive, which I now repurpose as an external drive.</li><li>1366x768 screen, NTSC</li></ul><p>And here are my <a href="https://www.laptopmag.com/reviews/hp-envy-x360-13-2020" rel="nofollow" target="_blank">new laptop</a> specifications:</p><ul style="text-align: left;"><li>AMD Ryzen 7 4700U</li><li>16GB of DDR4-3200 RAM (both soldered)</li><li>512GB of M.2 SSD. I have a plan to upgrade it to 1TB later, but it's not my priority at the moment.</li><li>1920x1080 screen, 98% sRGB</li><li>Touchscreen and stylus support, 360-degree hinge<br /></li></ul><p>From a gaming perspective, this is a downgrade since my previous laptop had NVIDIA Optimus, but from my overall perspective, it's an overall upgrade (the 16GB RAM and the 8-core processor are important). I also ended up picking AMD because they have a nice reputation nowadays and I don't want to fall into <a href="https://community.intel.com/t5/Graphics/BUG-dwm-exe-uses-memory-leakage-with-Intel-HD-Graphics-630/td-p/1222297" rel="nofollow" target="_blank">this Intel driver problem</a>. Furthermore, Genshin Impact doesn't play well with Intel GPUs/drivers.</p><p>NOTICE: Consider performing a clean install when you get the laptop, <b>after</b> you have redeemed the Office Home & Student and Dropbox promotions (if available).<br /></p><h2 style="text-align: left;">First Impression</h2><p style="text-align: left;">My first impression: this laptop is quite small. It's interesting to see a smaller laptop have better specs than a bigger laptop. This laptop also comes with Windows 10 and Office 2019 for free (although the latter may not be available depending on where you get the laptop, so ask the store first!), so I don't have to use cracked versions of those, which was the case with my old laptop (which I eventually reinstalled to ArchLinux).</p><p style="text-align: left;">Booting speed is fast. I don't see any moving circle loading indicator; it just boots straight into the lock screen. Or maybe that's just me experiencing a laptop with an M.2 SSD for the first time. 
The screen is also a bit yellow, but a quick search shows that sRGB tends to look a bit yellow because its white point stands at 6500K.</p><p style="text-align: left;">It has some bloatware which you can simply uninstall. For example, I'm not a fan of McAfee, so I uninstalled it; Windows then switches back to its built-in Windows Defender. It also comes with a Dropbox promotion (may vary depending on the store), which I simply redeemed to my dummy Dropbox account and then uninstalled. There's probably one important app that you'll most likely use: HP Command Center. This one controls the thermal profile of the laptop. Because the laptop has an aluminium chassis, it can be uncomfortable to touch during long CPU (and GPU) stress.<br /></p><h2 style="text-align: left;">Battery Life</h2><p style="text-align: left;">The battery life is simply amazing. I often use the laptop unplugged and it's enough to attend 2 consecutive online classes, which together can last for 2 and a half hours. At night, it lasts for 5 hours, and that even still left me with 25%.</p><h2 style="text-align: left;">Performance</h2><p style="text-align: left;">8 cores without SMT is fine. Definitely an upgrade from my previous laptop (2 cores + SMT). I feel a significant speedup in compiling Android projects, transcoding, and everything in general, while it still stays efficient. There's one thing to note, however: if you're pushing all the cores to 100% (i.e. video encoding), then you need to take care of the thermals. If you're using the laptop in an air-conditioned room, then it's fine to leave the thermal profile on performance. Otherwise you'll most likely need an external fan.</p><h2 style="text-align: left;">Touchscreen & Stylus</h2><p style="text-align: left;">The touchscreen and the stylus function okay, but not perfectly. For some reason, the touch input and the stylus input have jittery/wavy lines, so the pen can only be used for coloring or quick writing at best. Some reviewers said the MS Pen Protocol is to blame, however. You also need to press slightly harder for the stylus input to register. As for the touch input, I don't think the jitter is a problem there, and having a touchscreen is a plus point because I can test touchscreen-specific code in my LOVE projects.</p><p style="text-align: left;"> </p><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/aQ2Udxywfs8" width="320" youtube-src-id="aQ2Udxywfs8"></iframe></div>(seek to 2:01 to see what I'm talking about regarding the jittery thing)<br /><p></p><h2 style="text-align: left;">Bluescreen</h2><p style="text-align: left;">If you encounter a bluescreen with the SYSTEM_THREAD_EXCEPTION_NOT_HANDLED message pointing to msgpioclx.sys when performing a full shutdown (restart also counts), then it's <a href="https://answers.microsoft.com/de-de/windows/forum/all/windows-10-crashes-msgpioclxsys/0df8fc3c-2b6b-4ae1-b0e7-ed7ac2c82bc6" rel="nofollow" target="_blank">most likely the WiFi driver</a>. Simply roll back the Realtek WiFi driver to resolve the issue. If it gets installed again, then use Windows Update MiniTool to blacklist/hide the driver update.</p><p style="text-align: left;">UPDATE: If you encounter a Live Kernel Event without a bluescreen (the screen just goes blank then restarts), then the AMD driver is to blame. Downgrade to 27.20.11044.7!<br /></p><h2 style="text-align: left;">Genshin Impact</h2><p style="text-align: left;">Why do I even put this here? Well, fine.</p><p style="text-align: left;">The game recommends medium settings. 
Mind you, the game runs on integrated graphics, and the default recommended settings can run the game at 30 FPS. I prefer 60 FPS, so I simply turn off motion blur and set the render resolution to 0.8.</p><p style="text-align: left;"></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-ITpHrzD4zjg/YIoqKpUVgfI/AAAAAAAAAzs/hxpfMtFVtV0I2D6dTaDFZnNYMw1PraDqgCLcBGAsYHQ/s1280/gdefsett%252Cpng.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="863" data-original-width="1280" height="270" src="https://1.bp.blogspot.com/-ITpHrzD4zjg/YIoqKpUVgfI/AAAAAAAAAzs/hxpfMtFVtV0I2D6dTaDFZnNYMw1PraDqgCLcBGAsYHQ/w400-h270/gdefsett%252Cpng.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Default graphics settings.<br /></td></tr></tbody></table><p></p><p style="text-align: left;">Again, make sure to have an external fan or to play in an air-conditioned room, and set the thermal profile to performance. Otherwise you're risking overheating the laptop. There was one case where I overheated the laptop, resulting in a forced shutdown. At that time, I had Android Studio, Photoshop, and Genshin Impact open at the same time, since I needed to get some information from the game into Android Studio for my assignments.</p><p style="text-align: left;">There's only one annoyance I found, however. When I restart the laptop, or after a day, the game logs me out of my game account, forcing me to log in again. This never happened on my old laptop or on my phone. Maybe it's just because I copied the game files from my old laptop and used the launcher's "Locate Game Files" option instead of fully downloading the game from scratch. I'll update this section once I have a solution.</p><p style="text-align: left;">UPDATE: Reinstalled the game at 1.5. Problem still occurs, sadly.</p><p style="text-align: left;">UPDATE2: Disabling the "HP Support Solutions Framework Report" task at "Task Scheduler/Hewlett-Packard/HP Support Assistant" seems to have helped with the random logouts. You'll still be logged out when performing a restart/full shutdown, however.</p><p style="text-align: left;">UPDATE3: I'm currently "playing ping-pong" with their customer service to get this problem fixed. I ended up offering to assist in fixing the issue myself. Not sure how it will go (or maybe not).</p><p style="text-align: left;">UPDATE4: I ended up performing a clean install and so far the problem no longer occurs, but there are no vendor-specific apps installed yet.<br /></p><h2 style="text-align: left;">Additional Notes</h2><ul style="text-align: left;"><li>The laptop screen supports AMD FreeSync between 40 and 60 FPS.</li><li>You need to charge the stylus first.</li><li>Be careful when opening the pen box. I accidentally spilled two stylus tips inside the box because I opened the box the wrong way. I found both of them back, however.</li><li>It defaults to "Action Keys Mode". Pressing the "F" function keys runs the respective action icon instead (i.e. pressing just F4 toggles the keyboard backlight). This can be changed in the UEFI settings. </li><li>It comes with a multi-port hub with USB Type-A, Type-C, and HDMI, one of each.</li><li>Anything on the keyboard won't function when you flip the laptop more than 180 degrees. This includes the power button and the fingerprint reader. 
This can be annoying when the screen is turned off by Windows and it automatically locks.</li></ul><h2 style="text-align: left;">Conclusion</h2><p style="text-align: left;">I think that's all you need to know. Whether you want to get this laptop or not is up to you, but my suggestion is to wait for the Ryzen 5000 series<br /></p><p style="text-align: left;">... because those have SMT too.<br /></p>Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-59385217196021186412021-02-10T11:35:00.002+08:002021-02-10T11:35:32.745+08:00Bad Apple in CMake: How it Works?<div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/Ro92Qs0JLvg" width="320" youtube-src-id="Ro92Qs0JLvg"></iframe></div><p></p><p>There are already lots of Bad Apple videos played on various things, ranging from <a href="https://www.youtube.com/watch?v=7AaMaeAOitI" target="_blank">Minecraft and Command Blocks</a> and <a href="https://www.youtube.com/watch?v=tO6sfku_1b8" target="_blank">Minecraft sheep colors</a> to even <a href="https://github.com/leafyao8621/badapple/" target="_blank">your terminal</a>. I'm interested in the latter, but the one I linked uses C. I don't think anyone has ever done this, so I'm probably the first one to do so. So I present you a <a href="https://github.com/MikuAuahDark/BadAppleCMake/tree/2f40750" target="_blank">Bad Apple video player written in CMake</a>. This blog post will explain in detail how it works, but before we dive deep into the CMake script, there are some things to note.</p><ol style="text-align: left;"><li>CMake is slow. On my laptop, it can display the frames at 2.8 FPS. The video above is sped up.</li><li>I use braille Unicode characters. This allows me to encode a 2x4-pixel block in a single braille character. Also, Wikipedia has extensive documentation of which bits to set to activate the dots in braille characters.<br /></li><li>The videos are converted to <a href="https://en.wikipedia.org/wiki/Netpbm#PBM_example" target="_blank">PBM</a>. Why PBM? Because the Bad Apple PV is black-and-white, and because it's very simple to parse compared to PGM or PPM. As PBM packs pixels into bits, where the MSB is the first pixel, this means it's possible to display an 8x4-pixel block of the image using only 4 braille characters.<br /></li></ol><p>Now, open the <a href="https://github.com/MikuAuahDark/BadAppleCMake/blob/2f4075038638302e46e64143c869eea84ef78cac/CMakeLists.txt" target="_blank">CMakeLists.txt</a> file, as the points below refer to its line numbers.</p><ol style="text-align: left;"><li>The first few lines are not that important; they're already explained in the comments. The "PRINTER_MODE" condition will be explained later, so just skip to line 138.</li><li>At line 138, I look for the FFmpeg executable. FFmpeg is used to convert the Bad Apple video into PBM frames.</li><li>Lines 142 to 158, I look for <a href="https://github.com/ytdl-org/youtube-dl" target="_blank">youtube-dl</a>. youtube-dl is used to download the Bad Apple PV from YouTube.</li><li>Line 163 downloads only the video. I use <a href="https://cmake.org/cmake/help/latest/command/add_custom_command.html" target="_blank">add_custom_command</a> so the downloaded video is marked as "generated", which allows it to have a file-level dependency later on.</li><li>In lines 169 to 183, there are 6572 frames marked as "generated". 
Those 6572 files depend on 1 file, which is the video. This is why I said "file-level dependency". Notice that at line 181 I added "VERBATIM" because I need to pass the percent sign as-is. (Reason: Windows doesn't like it.)<br /></li><li>Then at line 186, I define a target "BadApple". This is the target that should be run, and it depends on those 6572 frame files. Thus, the order of execution is: download the video, convert it to 6572 frames, then run the specified command. If you inspect the command, it literally just runs CMake again on the current source directory, but it sets "PRINTER_MODE" to a truth value and sets the "FRAME_PATH" variable, so let's get back to line 22. You can quickly look up the "CMAKE_*" variables I use on the internet; they should be on the first page.<br /></li><li>Line 26 is the braille characters that CMake will use for display.</li><li>Line 51 is a function that takes 5 arguments: the result variable name, followed by the values of four bytes from the PBM. That function shifts the bits to fit the braille Unicode encoding bits (see <a href="https://en.wikipedia.org/wiki/Braille_Patterns#Identifying,_naming_and_ordering" target="_blank">https://en.wikipedia.org/wiki/Braille_Patterns#Identifying,_naming_and_ordering</a> for more information). I think an image will explain it better.<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-vC5ICNHGhhM/YCKrt0-oyUI/AAAAAAAAAw4/cko3MoGAFEou7GUXwZP3IBtJ-G7MzTZnQCLcBGAsYHQ/s666/bad_apple_cmake_img_1.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="350" data-original-width="666" height="210" src="https://1.bp.blogspot.com/-vC5ICNHGhhM/YCKrt0-oyUI/AAAAAAAAAw4/cko3MoGAFEou7GUXwZP3IBtJ-G7MzTZnQCLcBGAsYHQ/w400-h210/bad_apple_cmake_img_1.png" width="400" /></a></div><br /></li><li>More explanation on line 51: that means BYTE1 is used to determine whether dots 1 or 4 will be turned on. BYTE2 is used for dots 2 and 5, BYTE3 for dots 3 and 6, and finally, BYTE4 for dots 7 and 8. The formula is a bit confusing to read due to parentheses and brackets everywhere. <br /></li><li>Line 63 is a helper function that allows me to index an image byte by its X byte position and Y position. Note that it multiplies the index by 2 later on; that's because the whole PBM data is read as hex. <br /></li><li>Line 83 is the escape character, the building block of the ANSI escape codes for setting the cursor position later on.</li><li>Line 85 loops over all the 6572 frames and displays them one by one.</li><li>Line 87 reads the PBM image data directly <b>as hex</b> (because it's binary data). It's actually possible to calculate the offset by getting the string lengths of WIDTH and HEIGHT, but I'm lazy.</li><li>Lines 90 to 95 are basically the "Playback" text, which displays the current minute and second of the video. The Bad Apple PV runs at 30 FPS, so 30 FPS is assumed.</li><li>Lines 99 and 102 loop over every image pixel (except on the x axis, where we process 8 pixels at a time).</li><li>At line 104, the image from point 8 makes sense: 
BYTE1 through BYTE4 contain the values of the packed image pixels.</li><li>Finally, at line 119, it prints the PBM image data as braille characters using CMake's <a href="https://cmake.org/cmake/help/latest/command/message.html" target="_blank">message</a> function and moves the cursor up, ready to be overwritten by the next braille characters.<br /></li><li>And at last, at line 124, it sets the cursor to the end and finally the function finishes.</li></ol><p>Fun fact: I cut and sped up the videos with FFmpeg only, since Kdenlive's encoding options result in poor video quality (and my laptop's native resolution is only 720p). The total recording duration of the video is 41 minutes and 4 seconds</p><p>... or equivalent to the <a href="https://youtu.be/FtutLA63Cp8" target="_blank">Bad Apple PV</a> played 11 times, plus a quarter.</p><p><br /></p>Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-1565613744437861162021-01-03T12:38:00.000+08:002021-01-03T12:38:01.446+08:00What To Do in Case Hyper-V "Default Switch" is Missing or Disappears?<p>Starting my laptop today, I noticed that in my Task Manager, the "Default Switch" adapter had disappeared from the network adapters. Then I noticed that I couldn't ssh into the Hyper-V VM I set up a few days earlier. This quite puzzled me because, as it turns out, the <a href="https://github.com/yuk7/ArchWSL" target="_blank">ArchWSL</a> I set up with WSL2 also failed to start, and <a href="https://social.technet.microsoft.com/Forums/Windows/en-US/b6855b2f-0e9e-45ab-a0c8-043c5b4c0a5a/hypervvmswitch-flooding-event-log?forum=win10itprovirt" target="_blank">this error</a> flooded my Event Viewer. As a side note, the VM starts fine, but it doesn't have any network at all.<br /></p><p>I tried to <a href="https://social.technet.microsoft.com/Forums/en-US/8bcd0616-a024-4ada-92e3-e53a6d91e01d/hyperv-default-switch-missing?forum=win10itprovirt" target="_blank">delete the Hyper-V feature and reinstall it back</a>; the issue still persisted. sfc /scannow: "Windows protection did not find any integrity violations". So my best guess is that somehow removing Hyper-V doesn't remove all the network switch configuration. Time to do something dangerous.</p><p><b>Strong Notice: The steps below are dangerous. Consider creating a system restore point prior to performing the steps I mention below. You'll also lose all your Hyper-V VMs (and all Virtual Switch configurations), so ensure you have exported your VMs beforehand.</b><br /></p><ol style="text-align: left;"><li>Be prepared for lots of reboots. <br /></li><li>Uninstall Hyper-V and all its related components. In optionalfeatures.exe, untick Hyper-V, Virtual Machine Platform, and Windows Hypervisor Platform. Unticking Windows Subsystem for Linux is not needed (despite it using Hyper-V for WSL2). Then, click reboot.<br /></li><li>Open regedit.exe, then navigate to HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\vmsmp\parameters\SwitchList and delete all subkeys there. The same goes for NicList above SwitchList: remove all subkeys.</li><li>Reboot.</li><li>Repeat step 3 until there are no more subkeys.</li><li>If you deleted some values in step 5, reboot.<br /></li><li>Go to C:\ProgramData\Microsoft\Windows\Hyper-V; you may need to take ownership of that folder. Don't worry, 
Windows Explorer will ask you to do so with one click of a button.</li><li>Delete everything in that folder.</li><li>Reboot.</li><li>Repeat steps 3 and 5 again.<br /></li><li>Do the reverse of step 2, including the reboots.<br /></li><li>After the system boots up for a while, open an elevated PowerShell <b>and</b> a command prompt. In the elevated PowerShell, type "Get-VMSwitch". This one should hang; don't worry. Then in the elevated command prompt, execute "sc start hns".</li><li>Wait for a moment, then you should see "Default Switch" back in Task Manager, and the elevated PowerShell window should return values (no longer hanging).</li><li>Reconfigure your network switches if needed.</li><li>Import back your VMs.</li></ol><p>I don't quite remember the exact steps, but that's what I did. Now WSL2 starts again and I can ssh back into the VMs after importing them. </p><p>What a blog post to start 2021</p><p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://i.kym-cdn.com/photos/images/original/001/918/696/90d.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="476" data-original-width="591" height="258" src="https://i.kym-cdn.com/photos/images/original/001/918/696/90d.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">https://knowyourmeme.com/photos/1918696-disappointed-black-guy</td></tr></tbody></table></p><p>... unless it shows 34 December 2020 instead</p><p><br /></p>Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-41411573069586732352020-11-29T00:26:00.001+08:002020-11-29T00:30:26.363+08:00LÖVE on Windows 10 ARM64 Part 3: We're Going ARM64<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://media.discordapp.net/attachments/474705430434807819/781802932692975646/unknown.png?width=516&height=425" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="425" data-original-width="516" height="329" src="https://media.discordapp.net/attachments/474705430434807819/781802932692975646/unknown.png?width=516&height=425" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">LÖVE 12.0-development branch running in Windows 10 ARM64 under QEMU.</td></tr></tbody></table><p>This is the moment of truth, and this will be the last part of my LÖVE on Windows 10 ARM64 blog post series (<a href="https://auahdark687291.blogspot.com/2020/11/love-on-windows-10-arm64-part-1.html">part 1 here</a>, <a href="https://auahdark687291.blogspot.com/2020/11/love-on-windows-10-arm64-part-2-one.html" target="_blank">part 2 here</a>). In short, it's possible.</p><p>The long answer, however, requires various patches. Most of the patches went into external libraries that LÖVE needs. 
The timeline is simply <a href="https://github.com/MikuAuahDark/love-megasource/commits/windows-arm" target="_blank">these recent commits</a>, but an explanation of each of them follows:<br /></p><p>The first thing to do is to update SDL to 2.0.12, <a href="https://github.com/MikuAuahDark/love-megasource/commit/96f6ca1b4bdff31dc0a69f6ad3c14eea0c033686" target="_blank">apply Megasource-specific patches</a>, <a href="https://hg.libsdl.org/SDL/rev/d5fe4ad4d29c" target="_blank">then apply this patch</a> so it compiles under VS2019, because screw MSVC generating calls to memset and memcpy <a href="https://stackoverflow.com/a/48679987">even when you specify /NODEFAULTLIB</a>! Next is to update OpenAL-soft up to the <a href="https://github.com/kcat/openal-soft/commit/d86046d522f45804e28462db0c0e8e1a34a1cfe7" target="_blank">recent commit</a> for MSVC ARM64 support (see my previous blog post, where I mentioned OpenAL-soft), then <a href="https://github.com/MikuAuahDark/love-megasource/commit/1f60a63de8dbd729ae8aa96b55ff5aa0537ab831" target="_blank">re-apply the Megasource-specific patches</a>.</p><p>The next library is a bit tough, and this is where most of my time was spent. Ogg <a href="https://auahdark687291.blogspot.com/2020/05/container-vs-codec.html" target="_blank">and</a> Vorbis are the most annoying ones, to the point that I started to suspect a CMake bug (I use 3.19.1, the latest as of writing). I updated Ogg and Vorbis to 1.3.4 and 1.3.7 respectively, and for some reason I got an error that reads "cannot open input file 'ogg.obj'" for the "liblove" and "megatest" targets. Looking at the project configuration using Visual Studio shows that "ogg" is referenced twice in both targets. I unfortunately went to the last resort of using the <a href="https://github.com/MikuAuahDark/love-megasource/commit/4d8ef81f6b8e5c0ffa8f1021a16b99380ad1a2dc" target="_blank">Megasource-provided CMake</a> for both projects, and the problem went away. To be honest, I have no idea why that happens, and I'm fairly sure that using the Megasource-provided CMake may give inferior performance because it uses generic, non-architecture-dependent code. Anyway, it's solved, so let's move on.</p><p>The last change is to the <a href="https://github.com/MikuAuahDark/love-megasource/commit/2c80e8a2cd9968282298518dc31f61a9e898015d" target="_blank">Megasource CMakeLists.txt</a> itself. There are various changes there that need to be explained (I'll be using the green-highlighted line numbers).</p><ul style="text-align: left;"><li>Line 20: Add a variable for detecting ARM64 compilation. Currently, it only works for MSVC + Visual Studio targets, but this is sufficient for my needs at the moment.</li><li>Line 155 and line 171: Unfortunately, as of CMake 3.19.1, the <a href="https://cmake.org/cmake/help/v3.19/module/InstallRequiredSystemLibraries.html" target="_blank">InstallRequiredSystemLibraries</a> module doesn't support MSVC ARM64 and will pick the x64/AMD64 DLLs instead, so those lines suppress copying the MSVC redistributable libs when compiling for MSVC ARM64.</li><li>Line 242: SDL will try to load OpenGL32 on Windows first before trying other backends. 
This puzzled me a bit when prototyping my patches because even setting the <a href="https://github.com/love2d/love/blob/975cadf66b20284a6e239a9f74ef8b65067f3110/src/modules/window/sdl/Window.cpp#L243" target="_blank">LOVE_GRAPHICS_USE_OPENGLES=1</a>, <a href="https://github.com/spurious/SDL-mirror/blob/9337afd6dbabeb2125b7ff5b638083d58fce5fc1/include/SDL_hints.h#L67-L85" target="_blank">SDL_RENDER_DRIVER=opengles</a>, and <a href="https://github.com/spurious/SDL-mirror/blob/9337afd6dbabeb2125b7ff5b638083d58fce5fc1/include/SDL_hints.h#L1202-L1231" target="_blank">SDL_OPENGL_ES_DRIVER=1</a> environment variables had no effect, so I went to the last resort and told SDL not to compile the OpenGL backend instead. This is fine; LOVE will run through the OpenGL ES codepath using ANGLE.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://media.discordapp.net/attachments/329404808643608586/781796052612284416/unknown.png?width=492&height=425" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="425" data-original-width="492" height="346" src="https://media.discordapp.net/attachments/329404808643608586/781796052612284416/unknown.png?width=492&height=425" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">SDL fails to find OpenGL32.dll in Windows 10 ARM<br /></td></tr></tbody></table></li><li>Line 270 and line 332: <a href="https://github.com/LuaJIT/LuaJIT/issues/593" target="_blank">LuaJIT doesn't support Windows 10 ARM64 yet</a>, so the Lua 5.1.5 bundled with the Megasource must be used. This is actually a performance hit, but if even the LuaJIT interpreter can't compile (let alone the JIT compiler), then it's impossible to use LuaJIT there unless Mike adds support for it. This also increases fragmentation, because people who use LÖVE are used to the bitwise library provided by LuaJIT, but a possible fix for this is to bundle <a href="https://bitop.luajit.org/" target="_blank">LuaJIT's LuaBitOp</a> within Lua 5.1.5 or LOVE (when LuaJIT is not used).</li></ul><p>After applying the patches to Megasource, the patches in LÖVE itself are <a href="https://github.com/MikuAuahDark/love2d/commits/windows-arm-ownmega" target="_blank">as follows, mostly related to its build system and dependencies</a>:</p><p>First, <a href="https://github.com/MikuAuahDark/love2d/commit/b463ecf939f4bd437d8118c55a657884aa87b17f" target="_blank">tell LÖVE not to link to OpenGL</a>. While the Windows 10 SDK for ARM64 provides OpenGL headers, it doesn't include the OpenGL library, which causes link errors in a later step. 
This is fine, and there are no noticeable problems whatsoever (even in Windows x64 builds) because LÖVE will use SDL to load the OpenGL(ES) functions anyway.</p><p>The next is <a href="https://github.com/MikuAuahDark/love2d/commit/e11f6d83f25c47fd67e0a94fd341612c93dc9197" target="_blank">PhysFS, which is an easy fix</a>, and I plan to report that later on. <a href="https://github.com/MikuAuahDark/love2d/commit/daeb12e7dcf772c6225ecc75296efa637a403883" target="_blank">The last problem is dr_mp3 used in LOVE 12.0</a>. dr_mp3 and dr_flac don't expect this compiler and platform combination, so <a href="https://github.com/mackron/dr_libs/issues/169" target="_blank">I had to report this problem to upstream</a> for a proper fix (dr_flac doesn't produce a compile error, so that one can wait). This is why this blog post is slightly delayed.</p><p>Afterwards, LÖVE will compile, and you can install and push it to your Windows 10 ARM64 machine (or QEMU) and see LÖVE run there. I can't provide binaries at the moment because the automated GitHub Actions workflow that's supposed to produce LÖVE binaries as artifacts is failing for some reason, and a quick search shows that OpenAL-soft is to blame. So an update with prebuilt binaries will probably come in another blog post, or as edits to this blog post instead.<br /></p><p>Speaking of ARM64, someone also managed to compile LÖVE for Apple Silicon</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/--qablY-l-ws/X8J5E5VfmrI/AAAAAAAAAts/4wSL58U744g7-V3u0vpcYi0Hm364fqftgCLcBGAsYHQ/s571/Cuplikan%2Blayar%2B2020-11-29%2B002117.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="311" data-original-width="571" height="217" src="https://1.bp.blogspot.com/--qablY-l-ws/X8J5E5VfmrI/AAAAAAAAAts/4wSL58U744g7-V3u0vpcYi0Hm364fqftgCLcBGAsYHQ/w400-h217/Cuplikan%2Blayar%2B2020-11-29%2B002117.png" width="400" /></a></div><br /> ... which is related to this blog post's title.<br /><p></p><p></p>Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-10465293733117240012020-11-19T19:53:00.002+08:002020-11-19T19:53:53.651+08:00LÖVE on Windows 10 ARM64 Part 2: One Step at A Time<p> <table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://user-images.githubusercontent.com/7500438/99642011-f7b84e00-2a85-11eb-9948-99399763ea32.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="419" data-original-width="800" height="210" src="https://user-images.githubusercontent.com/7500438/99642011-f7b84e00-2a85-11eb-9948-99399763ea32.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">OpenAL-soft test program <a href="https://gist.github.com/MikuAuahDark/9d6dc0de1d46881d9c2d1a3a7da6d98a" target="_blank">I wrote</a> <a href="https://www.youtube.com/watch?v=bAkJcaMlc1E" target="_blank">running in Windows 10 ARM64 under QEMU</a><br /></td></tr></tbody></table></p><p>So I have a few pieces of news that I want to share. First, <a href="https://github.com/kcat/openal-soft/issues/494" target="_blank">OpenAL-soft now compiles for Windows 10 ARM64</a>. 
LÖVE depends on OpenAL for its high-performance audio backend (even the <a href="https://github.com/kcat/openal-soft" target="_blank">-soft version of OpenAL developed by kcat</a> is fast enough), so getting it to compile for Windows 10 ARM64 is one step forward in bringing LÖVE to Windows 10 ARM64. And lastly, <a href="https://devblogs.microsoft.com/directx/announcing-the-opencl-and-opengl-compatibility-pack-for-windows-10-on-arm/" target="_blank">Microsoft released an OpenGL driver built on top of Direct3D 12 for Windows 10 ARM</a>. This probably makes my effort of building weekly ANGLE binaries with GitHub Actions sound useless, but actually no: compiling ANGLE is a bit data-intensive, so people will just prefer grabbing binaries.</p><p>Since I've successfully managed to run Windows 10 ARM64 under QEMU, this blog post will be about that.</p><p>The first thing that came to mind when installing this was picking a good tutorial. <a href="https://www.withinrafael.com/2018/02/12/boot-arm64-builds-of-windows-10-in-qemu/" target="_blank">I found this tutorial (click here)</a> to be good, as it provides all the necessary drivers, firmware, and EFI vars needed, although I had to source the .iso myself; I already did that beforehand.</p><p>As the tutorial said, it's <b>slow as dirt on an i7-4770K</b>, so it's safe to assume it's even slower than dirt on my laptop's i5-7200U (okay, it has a dGPU, but that's irrelevant here). I had to do the setup multiple times with multiple strategies just to get past OOBE, and I think I burned around 120GB worth of SSD writes because of this.</p><p>My first method was basically the naive one: the automatic graphical installer, proceeding as Microsoft intended. This didn't go well because I couldn't get past OOBE; the system just restarted automatically and then got stuck in a restart loop (which forced me to redo the install). I retried this multiple times with different CPU (from 2 to 4 CPUs) and RAM (from 2 to 4GB) configurations, to no avail, so my conclusion was that I needed to look for other methods.</p><p>Because my problem was related to OOBE, it made me wonder if I could somehow skip it. I don't quite remember the whole process, but what I do remember is that I installed Windows 10 kinda "manually". I searched YouTube for Windows 10 hacks by Enderman and found an interesting video about installing Windows 10: basically, instead of installing via the GUI, just install it from the command prompt.</p><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/JxJ6a-PY1KA" width="320" youtube-src-id="JxJ6a-PY1KA"></iframe></div><p></p><p>The first time I tried the method, it resulted in an unbootable OS (blue screen with INACCESSIBLE_BOOT_DEVICE). That's because the "manual" install method assumes no additional drivers are needed to detect the disk. Unfortunately, in my case I needed to add the RedHat VirtIO driver prior to installing so it could detect the disk (see the tutorial link above at step 9). Even <a href="https://superuser.com/a/1177806" target="_blank">when I installed the drivers manually via DISM</a>, it still resulted in an endless boot loading icon, so I erased the whole install again.</p><p>Finally, the method I found to work is this. First, install Windows 10 as usual (from the ISO). After the first-phase install finished, I booted back into the setup ISO and opened a command prompt there (Shift+F10).
At this command prompt, I installed the RedHat VirtIO driver with the <a href="https://superuser.com/a/531819" target="_blank">"pnputil" command</a>, then mounted the partition back with diskpart. Afterwards, I just followed along with the YouTube video above starting at 0:56 (<b>no need to type the bcdboot command!</b>). For the last part, where the video says to wait 5 minutes, I actually waited for hours. The first attempt failed so I had to redo it, but then I found that it works if I simply "reset" the emulator just when it shows the login screen.</p><p>Afterwards, just tick the privacy thingy and you're ready to use it. You need to alter the registry again to kill the OneDrive setup though, because running x86-emulated binaries there is too slow, to the point that installing the VS2019 redistributable is impossible because no window pops up (let alone the OneDrive setup). For additional performance, <a href="https://kitsunemimi.pw/notes/posts/running-windows-10-for-arm64-in-a-qemu-virtual-machine.html#after-installation" target="_blank">these tips (click here) really help a lot</a> with making sure the CPU stays idle and doesn't spin at 100% continuously.</p><p>That's how I got Windows 10 ARM64 running under QEMU on my laptop. And of course I'll continue this series when I have progress.</p><p>... and if you want to read the previous blog post, please <a href="https://auahdark687291.blogspot.com/2020/11/love-on-windows-10-arm64-part-1.html" target="_blank">click here</a>.<br /></p>Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-1338836356806550572020-11-16T13:01:00.005+08:002020-11-16T13:01:34.692+08:00LÖVE on Windows 10 ARM64 Part 1: Compiling Dependencies<p>ARM64 laptops will probably start gaining traction around 2021 or 2022 (wild guess). As one of the LÖVE development team, this made me wonder if I can run LÖVE under Windows 10 ARM64, natively, because using x86 emulation is cheating and doesn't work anyway due to LÖVE's dependency on OpenGL.</p><p>The first things I had to think about were:</p><ol style="text-align: left;"><li>Get the Windows 10 ARM64 toolchain. This is easy; the Visual Studio Installer provides it.</li><li>Get <a href="https://www.qemu.org/download/" target="_blank">QEMU</a>.</li><li>Get a Windows 10 ARM64 image.</li></ol><p>Fact: I had to go to my friend's house and ask to use his WiFi to download those, because I'm poor. </p><p>Quickly testing the ARM64 toolchain by compiling my infamous <a href="https://github.com/MikuAuahDark/HonokaMiku" target="_blank">HonokaMiku</a> with CMake shows the ARM64 cross compiler works. Now, before trying to compile LÖVE, these things come to mind:<br /></p><ol style="text-align: left;"><li><a href="https://github.com/LuaJIT/LuaJIT/issues/593" target="_blank">LuaJIT doesn't support the Windows/ARM64 combination</a>.</li><li>OpenGL is not supported. </li></ol><p>Point 1 is kind of a bummer because even the LuaJIT interpreter is waaaayyy faster than Lua 5.1.5, but I have no choice other than adding the -DLOVE_JIT=0 CMake flag later on. Point 2 can be solved by using <a href="https://chromium.googlesource.com/angle/angle" target="_blank">ANGLE</a>. I had to "brute-force" <a href="https://github.com/MikuAuahDark/angle-winbuild" target="_blank">GitHub Actions to get ANGLE compiled for ARM64</a>, but this paid off, and the Actions build script can be extended to other architectures later as well.</p><p>When trying to compile it, unfortunately SDL, PhysFS, Vorbis, and OpenAL-soft fail to compile.
I've filed an <a href="https://github.com/kcat/openal-soft/issues/494" target="_blank">issue for OpenAL-soft</a>, but SDL (and the others) will probably need to wait longer, as I also need to attend online classes (university related), which slows down this experiment as a whole. Libraries known to compile are FreeType, GLSLang, zlib, and Ogg. Others are unknown, as MSBuild prevents me from compiling the remaining dependencies.</p><p>For the VM part, I haven't found a way to get past OOBE in Windows 10 ARM64 running under QEMU. Mind you, it's slower than dirt on an i5-7200U. The blog post about setting up the VM will have to wait.<br /></p><p>Also, this is me trying to get LÖVE to compile with the Windows 10 ARM64 toolchain:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-tPJMWykjEi8/X7IHXfoR2RI/AAAAAAAAAs4/0gZ_XIdzIac86JJTfjbQzEm_395FeTuEACLcBGAsYHQ/s978/temporary3.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Lumine is underrated." border="0" data-original-height="848" data-original-width="978" height="277" src="https://1.bp.blogspot.com/-tPJMWykjEi8/X7IHXfoR2RI/AAAAAAAAAs4/0gZ_XIdzIac86JJTfjbQzEm_395FeTuEACLcBGAsYHQ/w320-h277/temporary3.jpg" width="320" /></a></div>Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-88875638437355638172020-09-13T21:41:00.005+08:002020-09-13T21:43:37.323+08:00PHP Unix Socket under Windows<p>It's been a while since Windows 10 gained support for <a href="https://devblogs.microsoft.com/commandline/af_unix-comes-to-windows/" target="_blank">Unix Domain Sockets</a>/UDS (with limitations, but that doesn't matter for this blog). While the PHP language has some questionable design decisions, let me tell you something.</p><p>PHP has been prepared for AF_UNIX support on Windows for longer than you think.</p><p>Checking the PHP source code, the earliest presence of the sockaddr_un struct in the code has existed <a href="https://github.com/php/php-src/commit/9820c2a5af9d5a91da02f13a0f704ec4ada1ed15#diff-9222a097b0ada93f8fe32c4f0574366a" target="_blank">since PHP 4.1.0</a>. First, let's test whether UDS is supported in PHP 7.4. The server code:<br /></p><script src="https://gist.github.com/MikuAuahDark/750fd063a7ca4093e9b53dfe9dffadcb.js?file=uds_server.php"></script><p>On my machine, that code runs (no error message)</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://mikuauahdark.github.io/uds/uds_server.php.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="143" data-original-width="534" src="https://mikuauahdark.github.io/uds/uds_server.php.png" /></a> <br /></div><p></p><p>So, the client code:</p><script src="https://gist.github.com/MikuAuahDark/750fd063a7ca4093e9b53dfe9dffadcb.js?file=uds_client.php"></script><p>Running the client code prints the following message.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://mikuauahdark.github.io/uds/uds_client.php.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="163" data-original-width="559" src="https://mikuauahdark.github.io/uds/uds_client.php.png" /></a></div><p></p><p>That means UDS is working.
"Wait, that's probably PHP completely emulating UDS in Windows", fine, let's write the client code with C instead!</p><script src="https://gist.github.com/MikuAuahDark/750fd063a7ca4093e9b53dfe9dffadcb.js?file=uds_client.c"></script><p>Compiling that code and running it shows this output.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://mikuauahdark.github.io/uds/uds_client.c.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="143" data-original-width="578" src="https://mikuauahdark.github.io/uds/uds_client.c.png" /></a> <br /></div><p></p><p>That also means PHP didn't emulate UDS and uses WinSock directly.</p><p>So, UDS works in PHP 7.4.1 under Windows. What about earlier version? It should be, right? Since reference to UDS in Windows existed since PHP 4.1.0. So let's try with PHP 5.3.19.</p><p>Note: The reason I choose PHP 5.3.19 because I didn't look at older commits that reference sockaddr_un in Windows, and I just see that it's there since 4.1.0 while writing this blog, after doing all testing.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://mikuauahdark.github.io/uds/uds_server.php5.3.19.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="146" data-original-width="624" src="https://mikuauahdark.github.io/uds/uds_server.php5.3.19.png" /></a> <br /></div><p></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://mikuauahdark.github.io/uds/uds_client.php5.3.19.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="152" data-original-width="649" src="https://mikuauahdark.github.io/uds/uds_client.php5.3.19.png" /></a> <br /></div><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://mikuauahdark.github.io/uds/uds_client2.c.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="96" data-original-width="317" src="https://mikuauahdark.github.io/uds/uds_client2.c.png" /></a></div>The reverse is also possible, i.e. PHP 7.4.1 connecting to UDS which was created for PHP 5.3.19 and vice versa, but that will clutter this blog post with images.<br /><p>This is surprising discovery in my opinion. What if PHP team already predicted this blog post after all? Well ... </p><p>That's story for another blog post.</p><p><br /></p>Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-52705027629804027502020-08-14T10:30:00.002+08:002020-08-14T10:30:26.770+08:00Girls Band Party! PICO ~Extra Large~ Episode 15 Bodyswap Explained<div style="text-align: center;">
<span style="font-size: large;">Before you read the rest of the post, please watch the video.</span></div>
<div style="text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/J046nxouL1M" width="320" youtube-src-id="J046nxouL1M"></iframe></div>
<br />
Okay, based on that video, we can see that all members of Roselia are switching bodies with each other in a game called Neo Fantasy Online, as they say at the end. This blog post will explain who swapped with whom using only subtitle and visual hints.<br /><ol style="text-align: left;"><li>At 0:41, it's very unusual to see <u>Yukina</u> with that kind of expression. Furthermore, at 0:49, <u>Yukina</u> calls <u>Lisa</u> "Rin-rin" for some reason. From this, we can conclude that <b>Ako plays as Yukina</b>. This is proven at 2:16, where Ako plays and sees <u>Yukina</u>'s body.<br /></li><li>At 1:28, it's absolutely unusual to see <u>Ako</u> with that expression. Her eye shape somewhat resembles Yukina's. Also, she wants to throw away a rare item (implied by <u>Yukina</u>'s "NO" response). This means <b>Yukina plays as Ako</b>.</li><li>At 0:53, <u>Lisa</u> asks <u>Sayo</u> to take action, but <u>Rinko</u> responds instead. This means <b>Sayo plays as Rinko</b>.</li><li>At the same time, 0:53, if you look closely at <u>Sayo</u>'s expression, she tries to mimic <u>Lisa</u>'s cat mouth. Also, at 1:08, <u>Sayo</u>'s behavior is somewhat absurd, which means <b>Lisa plays as Sayo</b>.</li><li>This one is self-explanatory. From all 4 points above, <b>Rinko plays as Lisa</b>. This is further proven at 1:00 when <u>Lisa</u> apologizes to <u>Rinko</u>.<br /></li></ol><p>If you still don't understand, see this table.</p>
<table border="1">
<thead>
<tr>
<th>Player</th>
<th><img alt="Yukina" height="80" src="https://vignette.wikia.nocookie.net/bandori/images/7/74/Yukina_(icon).png" width="80" /></th>
<th><img alt="Sayo" height="80" src="https://vignette.wikia.nocookie.net/bandori/images/5/51/Sayo_(icon).png" width="80" /></th>
<th><img alt="Lisa" height="80" src="https://vignette.wikia.nocookie.net/bandori/images/2/2f/Lisa_(icon).png" width="80" /></th>
<th><img alt="Ako" height="80" src="https://vignette.wikia.nocookie.net/bandori/images/7/73/Ako_(icon).png" width="80" /></th>
<th><img alt="Rinko" height="80" src="https://vignette.wikia.nocookie.net/bandori/images/c/c5/Rinko_(icon).png" width="80" /></th>
</tr>
</thead>
<tbody>
<tr>
<td>Character</td>
<td><img alt="Ako" height="80" src="https://vignette.wikia.nocookie.net/bandori/images/7/73/Ako_(icon).png" width="80" /></td>
<td><img alt="Rin-rin" height="80" src="https://vignette.wikia.nocookie.net/bandori/images/c/c5/Rinko_(icon).png" width="80" /></td>
<td><img alt="Sayo" height="80" src="https://vignette.wikia.nocookie.net/bandori/images/5/51/Sayo_(icon).png" width="80" /></td>
<td><img alt="Yukina" height="80" src="https://vignette.wikia.nocookie.net/bandori/images/7/74/Yukina_(icon).png" width="80" /></td>
<td><img alt="Lisa" height="80" src="https://vignette.wikia.nocookie.net/bandori/images/2/2f/Lisa_(icon).png" width="80" /></td>
</tr>
</tbody>
</table>
<p>If you noticed, this is similar to my previous, older blog post about a body swap, which you can read <a href="https://auahdark687291.blogspot.com/2018/10/girls-band-party-pico-episode-16.html">here</a>. <br /></p><p>Also, while playing an RPG, make sure to manage your inventory frequently, before it's too late. Otherwise<br /></p><p>... your inventory may be full right when you've just found a rare herb, and it's hard to manage your inventory at that point.<br /></p>Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-68806729678413535002020-05-17T14:17:00.004+08:002021-05-08T13:33:56.331+08:00FL Studio and FFmpeg LibrariesDid you know that, under the hood, FL Studio uses FFmpeg for some of its operations? For example, the <a href="https://www.image-line.com/support/flstudio_online_manual/html/plugins/Fruity%20Video%20Player.htm" target="_blank">Fruity Video Player</a> plugin uses FFmpeg to load a wide array of video codecs, and the <a href="https://www.image-line.com/support/flstudio_online_manual/html/plugins/ZGameEditor%20Visualizer.htm" target="_blank">ZGameEditor Visualizer</a> export function uses FFmpeg libraries for its video encoding.<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
Is this good or bad? Both. It's good because FFmpeg is the mother of codecs and formats, so it can decode lots of audio and video formats. But it's bad because FL Studio's bundled FFmpeg libraries are the LGPL build, which lacks some video encoders like x264. This causes a problem where the ZGameEditor Visualizer plugin's lossy encoding option exports H263 when using .mkv.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
Note: ZGameEditor Visualizer seems to always export PNG-in-MP4 if you choose MP4, regardless of the "Uncompressed video" option.<br />
<br />
So is it possible to change the old H263 to H264? Yes. LGPL software must be dynamically linked so it's user-replaceable, and fortunately for us, FL Studio dynamically links to FFmpeg. So how do you replace the libraries? Just follow the steps below. Note that I assume a 64-bit version of Windows and that you're using the 64-bit FL Studio executable.<br />
<ol>
<li>Make sure FL Studio is not running. </li>
<li>Download the <a href="https://www.gyan.dev/ffmpeg/builds/" target="_blank">64-bit Windows FFmpeg shared binaries (release essentials or release full)</a>. If you already have them downloaded, you can use that; just make sure it's the latest version.</li>
<li>Navigate to "%ProgramFiles(x86)%\Image-Line\Shared\ffmpeg\x64" and replace all the DLLs (except ILVideo_x64.dll) with the DLLs from the downloaded zip. Just the DLLs, not the EXEs. <b>It's a good idea to back up all the DLL files there, just in case.</b></li>
</ol><p><b>UPDATE</b>: If you've been here before, you'll notice I changed the link. This is because Zeranoe no longer provides FFmpeg binaries. <br /></p><p>Now start FL Studio again, export to .mkv, and it will use H264 (x264).<br />
<br />
So what about other video codecs? Unfortunately, FL Studio special-cases the MP4 extension and forces PNG-in-MP4 when it's used. But from my experiments, here's a list of possible extensions to use and the resulting video codecs:<br />
</p><ul>
<li>.mkv - H264 video codec (high profile) and Vorbis audio codec </li>
<li>.webm - VP9 video codec and Opus audio codec</li>
<li>.ogv - Theora video codec and Vorbis audio codec</li>
<li>.nut - MPEG4 video codec and AAC audio codec</li>
</ul>
Any other benefits of replacing the DLLs? I'm not sure if this other feature is caused by replacing the DLLs, but you can basically load any audio as long as the extension is one FL Studio recognizes. Sadly, that means you have to change the extension to .wav, .flac, .wv, .ogg, or .mp3 first before loading it into FL Studio.<br />
<br />
I just hope the ZGameEditor Visualizer export function gets more options for controlling the video output when using custom presets. I'd love that.<br />
<ul>
</ul>
<b></b>Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com9tag:blogger.com,1999:blog-880306883673298649.post-91980413610255074602020-05-14T00:03:00.000+08:002020-05-14T00:12:06.456+08:00Container vs. CodecOften, people misinterpret the container as the video format. Here's one simple scenario.<br />
<br />
Say there's A, who has a very old phone released in 2007, and B, who has a smartphone released in 2019.<br />
A: Can you send me that MP4 video on your smartphone?<br />
B: Sure.<br />
A: Why doesn't it play on my phone? The phone says it supports the MP4 video format but it can't play your video. Your video is bad and so are you.<br />
<br />
So let's make this clear. First of all, there is no such thing as an "MP4 video format". Keep your questions for later. In the world of multimedia files such as video and audio, you have to know 3 things:<br />
<ol>
<li>Container file</li>
<li>Audio codec</li>
<li>Video codec</li>
</ol>
A container is a file that stores information about how the video and the audio are stored inside the file, and sometimes also subtitles and additional files. The container also records which decoder should be used to decode the video and the audio, and when the video and audio should be decoded and presented to the user. Here are some popular container formats:<br />
<ol>
<li>MP4. Yes MP4 is a container, not a video format. Now you know!</li>
<li>WebM.</li>
<li>Matroska. Also known as MKV. The mother of containers, as it supports <i>almost</i> every codec in existence.</li>
</ol>
So, if MP4 is not a video format, what are the actual video formats? Those are what we call video codecs. A codec describes how the video data is encoded; basically, the video codec is what matters for compatibility. Even if, say, I use MP4, which is supported by A's phone above, but with a more recent codec, then A's phone won't be able to play the video, despite it being MP4.<br />
<br />
So what kinds of video codecs are there?<br />
<ol>
<li>MPEG4, the XviD/DivX family goes here. A very old codec.</li>
<li>H264, the most popular codec since smartphones existed.</li>
<li>H265, a recent codec which provides smaller sizes and better quality.</li>
<li>VP9, a royalty-free codec by Google which competes with H264.</li>
<li>AV1, a royalty-free codec by various vendors which competes with H265.</li>
</ol>
PS: "royalty-free" term doesn't mean anything from user perspective. It only matters from developer perspective. <br />
<br />
For the scenario above, if B's video uses the H265 codec, then A's phone won't be able to play it, even though the video itself is inside an MP4 container file. Now everything makes sense, right?<br />
<br />
Then there's also the audio codec. Watching a silent video is not very fun, right? That's where the audio codec comes in. The definition is the same as for the video codec above, but for audio instead. There's only one difference: most audio codecs can be extracted out of their container into a standalone file. That's not possible for video codecs.<br />
<br />
So, a list of audio codecs, please? Okay.<br />
<ol>
<li>AAC. Its standalone file extension is .aac (actually MPEG ADTS). Can be placed inside an MP4 container.</li>
<li>Opus. Must be placed in an Ogg or WebM container.</li>
<li>Vorbis. Must be placed in an Ogg container. </li>
<li>FLAC. Its standalone file extension is .flac. Can be placed inside an Ogg container.</li>
<li>MP3. Its standalone file extension is .mp3. Can be placed inside an MP4 container.</li>
</ol>
Whoa, hang on, so Ogg is not an audio format? Yes. Ogg is also a container. It can even contain Theora video.<br />
<br />
So the conclusion is: a video file having an extension you know doesn't guarantee your device can play it. Like, you might feel your device is superior because it can decode <a href="https://www.youtube.com/playlist?list=PLyqf6gJt7KuHBmeVzZteZUlNUQAVLwrZS" target="_blank">AV1</a> in MP4, until <a href="https://www.image-line.com/support/flstudio_online_manual/html/plugins/ZGameEditor%20Visualizer.htm" target="_blank">FL Studio's ZGameEditor Visualizer</a> lossless video export function writes <a href="https://en.wikipedia.org/wiki/Portable_Network_Graphics" target="_blank">PNG</a> images inside MP4.Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-81954798940430174422020-01-12T21:21:00.002+08:002020-01-12T21:21:34.657+08:00Know Your Lua ErrorsOne problem when trying to help people with their Lua problems is that they don't know how to read Lua errors. You can be 100% sure that saying "it doesn't work" is absolutely not helpful at all. So I'll tell you how to read Lua errors properly and what they mean.<br />
<br />
In this post, I use the "<>" symbol to denote something arbitrary. Start by reading the syntax of a Lua error. A Lua error usually has a syntax like this:<br />
<br />
<pre><filename>:<lineNumber>: <message>
</pre>
<br />
<filename> is the script file. <lineNumber> is the line number where the error occurred. That's very easy to remember, right? Now for the error message. The error message is usually very descriptive, but sometimes it doesn't really tell you what exactly is wrong, so here's a list of Lua errors that I'm aware of. If your Lua error is unlisted, that can mean I didn't add it yet or it's thrown by an external program.<br />
<ol>
<li><b>attempt to call global '<name>' (a <non-function> value)</b><br />This is caused when you're trying to call a global variable called '<name>', but '<name>' is not a function. Example:<br />
<pre>print() -- works
table() -- attempt to call global 'table' (a table value)
_VERSION() -- attempt to call global '_VERSION' (a string value)
anilvalue() -- attempt to call global 'anilvalue' (a nil value)
</pre>
</li>
<li><b>attempt to call field '<field>' (a <non-function> value)</b><br />Similar to above, but this occurs when you try to call something within a table.<br />
<pre>-- Note that 'math' is a global variable which is a table
math.abs(123) -- works
math.pi() -- attempt to call field 'pi' (a number value)
</pre>
</li>
<li><b>bad argument #<n> to '<name>' (<type1> expected, got <type2>)</b><br />This is caused when the function '<name>' expects a value of type <type1> for its n-th argument, but the caller passed something of type <type2> instead.<br />
<pre>-- io.open expects a string for the 1st argument
local file = io.open(io.open) -- bad argument #1 to 'open' (string expected, got function)
-- tonumber's 2nd argument expects a number if present
tonumber("0xFF") -- works
tonumber("0xFF", table) -- bad argument #2 to 'tonumber' (number expected, got table)
</pre>
</li>
<li><b>table index is nil</b><br />To be honest, this is the most undescriptive Lua error message. It means you tried to assign a value to a table at index "nil" (I mean literal nil).<br />
<pre>table[nil] = io -- table index is nil
</pre>
</li>
<li><b>bad argument #<n> to '<name>' (invalid option '<something>')</b><br />This means you passed an invalid option. Notable functions that throw this are 'collectgarbage' and 'file:seek'.<br />
<pre>print(collectgarbage("count")) -- works
collectgarbage("asd") -- bad argument #1 to 'collectgarbage' (invalid option 'asd')
</pre>
</li>
</ol>
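Before asking for help, it also pays to capture the traceback yourself. Here's a minimal sketch (plain Lua 5.1, assuming the debug library is available):<br />
<br />
<pre>-- Wrap the code with xpcall and debug.traceback so the error report
-- contains both the message and the stack traceback.
local ok, err = xpcall(function()
    anilvalue() -- attempt to call global 'anilvalue' (a nil value)
end, debug.traceback)

if not ok then
    print(err) -- message followed by "stack traceback:" lines
end
</pre>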
So I think that covers the most common Lua errors you'll encounter. In case you need help, please provide the Lua error message, plus the traceback if available. The traceback is also easy to read, its syntax is similar to the above, and it definitely helps.<br />
<br />
... unless you got a very rare "PANIC" error, which is unhelpful. No, really.Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-34026186545998829582019-10-21T23:29:00.001+08:002019-10-22T18:40:47.456+08:00Enabling Private DNS for Modified-Android that Lack Such Settings?One of the best features in Android 9 is the Private DNS feature, which prevents your DNS requests from being modified in any way by third parties or even by your ISP (this always happens in Indonesia, huh). Basically, it encrypts your DNS requests and signs them so no one (except the destination) can see them. And even if they see them, they can't modify them because they're signed.<br />
<br />
Enough of that. Some phones running Pie unfortunately lack this feature. Some OSes like MIUI actually just hide it, so invoking "am start com.android.settings/.Settings\$NetworkDashboardSetting" will show it. However, something like ColorOS completely removed it from their settings, so we shouldn't rely on that MIUI method.<br />
<br />
Now, my idea was: "What if we set those options from ADB instead?" It took me some research and ADB shell + grep on my phone, and here's what I found.<br />
<br />
<br />
<pre>$ settings list global | grep dns
private_dns_mode=hostname
private_dns_specifier=1dot1dot1dot1.cloudflare-dns.com
</pre>
<br />
It looks clear that we can simply set those values from ADB. The shell does have access to modify those settings, at least on my phone (a Mi A1 as of writing). So here are the possible setting combinations.<br />
<br />
<br />
<pre>$ settings put global private_dns_specifier resolver_hostname
$ settings put global private_dns_mode off|opportunistic|hostname
</pre>
<br />
Change "resolver_hostname" to something like "1dot1dot1dot1.cloudflare-dns.com" and see if it works. Note that the Private DNS hostname setting only works if "private_dns_specifier" is set to "hostname". If your phone stops connecting to internet (can't resolve any hostname), that means you messed up the "private_dns_specifier". Double check and try again.<br />
<br />
Note that this method works on my phone, which also has the ability to set those options in the UI, so it would be good if someone could test this on a phone that runs Android 9 but lacks the option in its settings UI.<br />
<br />
Update: If you get something like "Neither user 2000 nor current process has android.permission.WRITE_SECURE_SETTINGS", that means the OS customizations <a href="https://forum.xda-developers.com/find-X/help/enable-writesecuresettings-app-adb-t3855596" target="_blank">enforce some additional protection</a>. You may (or may not) be able to disable that protection in the developer options window and try again. Thanks to my friend for testing this on a Realme 3 Pro; the feature actually works as intended.<br />
<br />Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-90306153242857049872019-06-30T12:46:00.002+08:002019-06-30T12:46:43.006+08:00MediaFoundation decoder for LOVE & decoding in-memory.Windows 7 has its own COM-based API to aid in decoding a variety of audio formats, and if you're concerned about patents (AAC, why), don't worry, as it's a licensed decoder. This blog post is about how I wrote a LOVE decoder that uses MediaFoundation.<br />
<h2>
<a href="https://www.blogger.com/null" id="Basics_2"></a>Basics</h2>
The first thing to know is what the LOVE decoder class looks like.<br />
<pre><code class="language-cpp"><span style="color: #859900;">class</span> Decoder : <span style="color: #859900;">public</span> Object
{
<span style="color: #859900;">public</span>:
<span style="color: #859900;">static</span> love::Type type;
Decoder(Data *data, <span style="color: #859900;">int</span> bufferSize);
<span style="color: #859900;">virtual</span> ~Decoder();
<span style="color: #859900;">static</span> <span style="color: #859900;">const</span> <span style="color: #859900;">int</span> DEFAULT_BUFFER_SIZE = <span style="color: #2aa198;">16384</span>;
<span style="color: #859900;">static</span> <span style="color: #859900;">const</span> <span style="color: #859900;">int</span> DEFAULT_SAMPLE_RATE = <span style="color: #2aa198;">44100</span>;
<span style="color: #859900;">static</span> <span style="color: #859900;">const</span> <span style="color: #859900;">int</span> DEFAULT_CHANNELS = <span style="color: #2aa198;">2</span>;
<span style="color: #859900;">static</span> <span style="color: #859900;">const</span> <span style="color: #859900;">int</span> DEFAULT_BIT_DEPTH = <span style="color: #2aa198;">16</span>;
<span style="color: #859900;">virtual</span> Decoder *<span style="color: #859900;">clone</span>() = <span style="color: #2aa198;">0</span>;
<span style="color: #859900;">virtual</span> <span style="color: #859900;">int</span> <span style="color: #859900;">decode</span>() = <span style="color: #2aa198;">0</span>;
<span style="color: #859900;">virtual</span> <span style="color: #859900;">int</span> <span style="color: #859900;">getSize</span>() <span style="color: #859900;">const</span>;
<span style="color: #859900;">virtual</span> <span style="color: #859900;">void</span> *<span style="color: #859900;">getBuffer</span>() <span style="color: #859900;">const</span>;
<span style="color: #859900;">virtual</span> <span style="color: #859900;">bool</span> <span style="color: #859900;">seek</span>(<span style="color: #859900;">double</span> s) = <span style="color: #2aa198;">0</span>;
<span style="color: #859900;">virtual</span> <span style="color: #859900;">bool</span> <span style="color: #859900;">rewind</span>() = <span style="color: #2aa198;">0</span>;
<span style="color: #859900;">virtual</span> <span style="color: #859900;">bool</span> <span style="color: #859900;">isSeekable</span>() = <span style="color: #2aa198;">0</span>;
<span style="color: #859900;">virtual</span> <span style="color: #859900;">bool</span> <span style="color: #859900;">isFinished</span>();
<span style="color: #859900;">virtual</span> <span style="color: #859900;">int</span> <span style="color: #859900;">getChannelCount</span>() <span style="color: #859900;">const</span> = <span style="color: #2aa198;">0</span>;
<span style="color: #859900;">virtual</span> <span style="color: #859900;">int</span> <span style="color: #859900;">getBitDepth</span>() <span style="color: #859900;">const</span> = <span style="color: #2aa198;">0</span>;
<span style="color: #859900;">virtual</span> <span style="color: #859900;">int</span> <span style="color: #859900;">getSampleRate</span>() <span style="color: #859900;">const</span>;
<span style="color: #859900;">virtual</span> <span style="color: #859900;">double</span> <span style="color: #859900;">getDuration</span>() = <span style="color: #2aa198;">0</span>;
<span style="color: #859900;">protected</span>:
StrongRef<Data> data;
<span style="color: #859900;">int</span> bufferSize;
<span style="color: #859900;">int</span> sampleRate;
<span style="color: #859900;">void</span> *buffer;
<span style="color: #859900;">bool</span> eof;
};
</code></pre>
Anything that ends with <code>= 0;</code> is a pure virtual method which we must implement in our derived class. Now let's derive the <code>Decoder</code> class.<br />
<h2>
<a href="https://www.blogger.com/null" id="MFDecoder_Class_38"></a>MFDecoder Class</h2>
<pre><code class="language-cpp"><span style="color: #859900;">class</span> MFDecoder: <span style="color: #859900;">public</span> Decoder
{
<span style="color: #859900;">public</span>:
MFDecoder(Data *data, <span style="color: #859900;">int</span> bufferSize);
<span style="color: #859900;">virtual</span> ~MFDecoder();
<span style="color: #859900;">static</span> <span style="color: #859900;">bool</span> <span style="color: #859900;">accepts</span>(<span style="color: #859900;">const</span> <span style="color: #268bd2;">std</span>::<span style="color: #268bd2;">string</span> &ext);
<span style="color: #859900;">static</span> <span style="color: #859900;">void</span> <span style="color: #859900;">quit</span>();
Decoder *<span style="color: #859900;">clone</span>();
<span style="color: #859900;">int</span> <span style="color: #859900;">decode</span>();
<span style="color: #859900;">bool</span> <span style="color: #859900;">seek</span>(<span style="color: #859900;">double</span> s);
<span style="color: #859900;">bool</span> <span style="color: #859900;">rewind</span>();
<span style="color: #859900;">bool</span> <span style="color: #859900;">isSeekable</span>();
<span style="color: #859900;">int</span> <span style="color: #859900;">getChannelCount</span>() <span style="color: #859900;">const</span>;
<span style="color: #859900;">int</span> <span style="color: #859900;">getBitDepth</span>() <span style="color: #859900;">const</span>;
<span style="color: #859900;">double</span> <span style="color: #859900;">getDuration</span>();
<span style="color: #859900;">private</span>:
<span style="color: #859900;">static</span> <span style="color: #859900;">bool</span> <span style="color: #859900;">initialize</span>();
<span style="color: #859900;">static</span> <span style="color: #859900;">void</span> *initData;
<span style="color: #586e75;">// non-exposed datatype to prevent cluttering LOVE</span>
<span style="color: #586e75;">// includes with Windows.h</span>
<span style="color: #859900;">void</span> *mfData;
<span style="color: #586e75;">// channel count</span>
<span style="color: #859900;">int</span> channels;
<span style="color: #586e75;">// temporary buffer</span>
<span style="color: #268bd2;">std</span>::<span style="color: #268bd2;">vector</span><<span style="color: #859900;">char</span>> tempBuffer;
<span style="color: #586e75;">// amount of temporary PCM buffer</span>
<span style="color: #859900;">int</span> tempBufferCount;
<span style="color: #586e75;">// byte depth</span>
<span style="color: #859900;">int</span> byteDepth;
<span style="color: #586e75;">// duration</span>
<span style="color: #859900;">double</span> duration;
};
</code></pre>
There are few points that must be noted here.<br />
<ul>
<li>The MFDecoder constructor receives a LOVE <code>Data</code> object, which is data located in a block of memory, and the desired buffer size.</li>
<li>The MediaFoundation API can return any number of samples, so we use a temporary buffer to hold the leftovers after decoding.</li>
<li>You may notice that <code>mfData</code> is <code>void*</code>. The reason for this is that there's a problem compiling LOVE if <code>Windows.h</code> is included BEFORE the keyboard module. We also don't want to clutter the includes with Windows-specific headers and drag down the compilation time.</li>
<li>The <code>tempBuffer</code> is an <code>std::vector</code>. Yes, this is intentional, so we don't have to manage the allocated memory, taking advantage of RAII. It also helps in case MediaFoundation returns more data than the provided buffer, by simply reallocating a bigger temporary buffer.</li>
<li>Then there’s <code>initData</code> member. This is set at <code>initialize</code> function above it, which is called when new MediaFoundation decoder is created or if the compatible extensions are being checked.</li>
</ul>
<h2>
<a href="https://www.blogger.com/null" id="Problems_82"></a>Problems</h2>
Now there are some problems.<br />
<ul>
<li>
MediaFoundation doesn’t officially support loading media from memory.<br />
MediaFoundation assumes the media file resides somewhere in the local filesystem or on the network, probably due to its DRM nature. There's <a href="https://blogorama.nerdworks.in/playinginmemoryaudiostreamsonw/">this blog post</a>, but it only works on Windows 8 and uses C++/CLI, which has its own 2 problems:<br />
<ol>
<li>We can’t use C++/CLI when compiling LOVE, that would reduce our compilation times and increase the bloat which we want to try to minimize as possible.</li>
<li>My target is to make the decoder available for Windows 7 and later. <code>IRandomAccessStream</code> and the functions it uses are Windows 8 and later.</li>
</ol>
Then I found out there’s <a href="https://docs.microsoft.com/en-us/windows/desktop/api/mfidl/nf-mfidl-mfcreatemfbytestreamonstream"><code>MFCreateMFByteStreamOnStream</code></a> which accepts <code>IStream</code> interface. <code>IStream</code> interface is available since Windows 2000, which then <a href="https://docs.microsoft.com/en-us/windows/desktop/api/shlwapi/nf-shlwapi-shcreatememstream"><code>SHCreateMemStream</code></a> can be used to create one.<br />
</li>
<li>
The functions that I use require linking to <code>Mfplat.dll</code> and <code>Mfreadwrite.dll</code>.<br />
The latter is only available starting with Windows 7. Since I want to make sure LOVE still runs on Windows Vista too (without the MediaFoundation decoding capabilities, of course), I have to load it dynamically, hence the <code>MFDecoder::initialize()</code> static function.<br />
</li>
<li>
Once the <code>IMFByteStream</code> is created, you have to set the MIME type.<br />
It's done by casting the <code>IMFByteStream</code> to <code>IMFAttributes</code> and setting the MIME type there. Unfortunately, as of LOVE commit <a href="https://bitbucket.org/rude/love/commits/ccf9e63cf0f1">ccf9e63</a>, the decoder constructor no longer receives the audio extension, so we have to test every possible supported MIME type. Fortunately, Microsoft gives us a <a href="https://docs.microsoft.com/en-us/windows/desktop/medfound/supported-media-formats-in-media-foundation">list of supported media formats</a> in their documentation, so I just get the MIME strings from the IIS MIME types (see the sketch after this list).<br />
</li>
</ul>
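<p>Putting those pieces together, here's a minimal sketch of the in-memory loading path (error handling omitted, MediaFoundation assumed to be already initialized with <code>MFStartup</code>, and the DLLs linked directly rather than loaded dynamically as the real code does; names are illustrative, not LOVE's actual code):</p>
<pre><code class="language-cpp">#include <windows.h>
#include <shlwapi.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

IMFSourceReader *createReader(const BYTE *data, UINT size, const WCHAR *mime)
{
    // IStream over a block of memory, available since Windows 2000.
    IStream *stream = SHCreateMemStream(data, size);

    // Wrap the IStream into an IMFByteStream.
    IMFByteStream *byteStream = NULL;
    MFCreateMFByteStreamOnStream(stream, &byteStream);

    // Set the MIME type through the byte stream's IMFAttributes.
    IMFAttributes *attributes = NULL;
    byteStream->QueryInterface(IID_PPV_ARGS(&attributes));
    attributes->SetString(MF_BYTESTREAM_CONTENT_TYPE, mime);
    attributes->Release();

    // Create the source reader, then ask it to decode to PCM.
    IMFSourceReader *reader = NULL;
    MFCreateSourceReaderFromByteStream(byteStream, NULL, &reader);

    IMFMediaType *pcm = NULL;
    MFCreateMediaType(&pcm);
    pcm->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
    pcm->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_PCM);
    reader->SetCurrentMediaType(MF_SOURCE_READER_FIRST_AUDIO_STREAM, NULL, pcm);
    pcm->Release();

    byteStream->Release();
    stream->Release();
    return reader;
}
</code></pre>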
After those problems are resolved, it's only a matter of setting the properties of the <code>IMFSourceReader</code>, like creating a decoder which outputs PCM.<br />
<h2>
<a href="https://www.blogger.com/null" id="Seeking_97"></a>Seeking</h2>
Yes, the <code>IMFSourceReader</code> supports seeking, but there's no guarantee that it will be accurate. The function you're looking for is <a href="https://docs.microsoft.com/en-us/windows/desktop/api/mfreadwrite/nf-mfreadwrite-imfsourcereader-setcurrentposition"><code>IMFSourceReader::SetCurrentPosition</code></a>, which accepts time in 100-nanosecond units (to convert seconds to 100-nanosecond units, multiply by 1e+7).<br />
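<p>In code, the conversion looks like this (a sketch; <code>reader</code> is the <code>IMFSourceReader</code> and <code>seconds</code> the requested position):</p>
<pre><code class="language-cpp">// GUID_NULL selects the default time format: 100-nanosecond units.
PROPVARIANT position;
PropVariantInit(&position);
position.vt = VT_I8;
position.hVal.QuadPart = (LONGLONG) (seconds * 1e+7); // seconds -> 100ns
HRESULT hr = reader->SetCurrentPosition(GUID_NULL, position);
PropVariantClear(&position);
</code></pre>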
<h2>
<a href="https://www.blogger.com/null" id="Another_Problem_GUID_100"></a>Another Problem: GUID</h2>
I’m getting linker errors when compiling LOVE with the MediaFoundation decoder as it complains about unresolved GUID. I also don’t want to link to any of MediaFoundation DLLs so it’s binary compatible with Windows Vista (XP support is dropped as of LOVE 11.0). Temporary workaround to fix this is to copy the GUID declaration in header files to <code>const GUID</code> variables.<br />
<h2>
<a href="https://www.blogger.com/null" id="Aftermath_103"></a>Aftermath</h2>
After getting everything running, I now have a LOVE build which can load AAC and WMA using MediaFoundation; how good is that, huh? You can check the full source code <a href="https://github.com/MikuAuahDark/livesim3-love/blob/master/src/modules/sound/lullaby/MediaFoundationDecoder.cpp">here</a>. The respective header lies in the same directory as the C++ file.<br />
<br />
Now you may ask, what about MinGW? Well, unfortunately LOVE doesn't support being compiled under MinGW in the first place, so compiling LOVE under Windows is only supported with the MSVC compiler.<br />
<br />
And if anyone wants to decode audio from memory using MediaFoundation, then this blog post is what they're looking for.<br />
<br />
<span style="color: white;">Post is written in Markdown first then converted to HTML lol.</span> Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-2841370821597213042019-06-05T18:09:00.004+08:002019-06-05T18:09:49.370+08:00Fixing stack overflow on older game by limiting exposed OpenGL extensions.There's this GameHouse game, called AirStrike 3D. This game itself is released back in very old days, not accounting for the hardware development and new GPUs and new OpenGL extensions. Things were mostly fixed-size buffers back then. Until at one point when I decided to install it back, I can't run this game.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-bjZRpmNCm-o/XPeQc1JAm2I/AAAAAAAAAcE/tRJ6LW_7n3g9pEwFjiuBEcrwl29z1JdwQCLcBGAs/s1600/temporary.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="627" data-original-width="802" height="250" src="https://1.bp.blogspot.com/-bjZRpmNCm-o/XPeQc1JAm2I/AAAAAAAAAcE/tRJ6LW_7n3g9pEwFjiuBEcrwl29z1JdwQCLcBGAs/s320/temporary.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Trying to run the game simply crashes. I run the game in Windowed mode by editing their config.ini</td></tr>
</tbody></table>
At first I thought this was a Windows 10 problem, since running the game in VirtualBox with Windows XP seemed fine, until I had the idea to run the game with Mesa3D instead, and it still crashed. I decided to start the Visual Studio debugger: the crash pointed to some random location, with an access violation about executing a piece of RAM (due to page permissions), so I thought "this is probably a stack overflow" and decided to check the stack. What a surprise: I saw the whole OpenGL extension string sitting on the stack, so the crash is caused by the extension string returned by my OpenGL driver simply being too long for the game to handle. My laptop is dual-GPU, and neither GPU can run the game, because both extension strings are too long.<br />
<br />
After a bit of searching, I found <a href="https://www.mesa3d.org/application-issues.html" target="_blank">this Mesa3D page</a>, which describes how to limit the OpenGL extension string returned to work around some games. It works great, but I don't want to use Mesa3D because my laptop is not an AMD Threadripper with dozens of threads, so I decided to roll my own OpenGL32 which forwards all OpenGL calls to Windows' OpenGL32.dll but intercepts the glGetString(GL_EXTENSIONS) call and limits the extension string returned.<br />
<br />
First, I took a look at the Mesa3D source code for an extension table list I could use, and found this <a href="https://github.com/mesa3d/mesa/blob/45ca7798dc32c1cb7da8f94af9a7d7400ee9bc12/src/mesa/main/extensions_table.h" target="_blank">header file</a>, which is exactly what I was looking for. Then I realized that Windows' OpenGL32.dll has 360 functions, and I didn't want to write the forwarding functions by hand; who wants to do that? So instead of writing them by hand, I used my Lua programming power to parse the gl.h and WinGDI.h header files and generate a file which forwards the GL functions to the original Windows OpenGL functions. After fixing the calling conventions and making sure the resulting function names aren't <a href="https://stackoverflow.com/questions/38710243/dll-using-stdcall-without-name-decoration-why-does-it-even-work" target="_blank">decorated</a> (by generating a def file too), I finally had a working program.<br />
<br />
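The interception itself boils down to something like this (a simplified C++ sketch of the idea, not the actual gl-ext-limit source; the trimmed extension list below is just a placeholder):<br />
<br />
<pre>// Forwarding DLL sketch: load the real opengl32.dll from the system
// directory and override glGetString(GL_EXTENSIONS) only.
#include <windows.h>
#include <string.h>

typedef unsigned int GLenum;
typedef unsigned char GLubyte;
#define GL_EXTENSIONS 0x1F03

static const char trimmedExtensions[] =
    "GL_ARB_multitexture GL_EXT_texture_env_combine"; // placeholder list

typedef const GLubyte *(APIENTRY *glGetString_t)(GLenum);
static glGetString_t realGlGetString;

extern "C" const GLubyte *APIENTRY glGetString(GLenum name)
{
    if (!realGlGetString)
    {
        // Loading by full system path avoids resolving back to this DLL,
        // which shares the opengl32.dll name.
        char path[MAX_PATH];
        GetSystemDirectoryA(path, MAX_PATH);
        strcat_s(path, MAX_PATH, "\\opengl32.dll");
        realGlGetString = (glGetString_t)
            GetProcAddress(LoadLibraryA(path), "glGetString");
    }

    if (name == GL_EXTENSIONS)
        return (const GLubyte *) trimmedExtensions;

    return realGlGetString(name);
}
</pre>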
After putting the new DLL in the game folder and setting the necessary environment variable, I was surprised that the game finally ran. It runs at about 500FPS on my HD Graphics 620 (CPU-bottlenecked), but this game uses the fixed-function pipeline, so I'm not surprised by the absurdly high FPS.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-CTg5uKfP-g0/XPeTBbNiWLI/AAAAAAAAAcc/CMv9Af2CrBgMbxWkKu1YSv0h8gi3APEvQCLcBGAs/s1600/temporary2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="627" data-original-width="802" height="250" src="https://1.bp.blogspot.com/-CTg5uKfP-g0/XPeTBbNiWLI/AAAAAAAAAcc/CMv9Af2CrBgMbxWkKu1YSv0h8gi3APEvQCLcBGAs/s320/temporary2.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Game runs at ~540FPS. For anyone curious, here's my <a href="https://pastebin.com/fLXDveTY">astrike.log</a>.</td></tr>
</tbody></table>
The source code of the hooked OpenGL32.dll that I'm talking about is available on my <a href="https://github.com/MikuAuahDark/gl-ext-limit">GitHub</a>, including the build instructions (it's CMake); if compiling it is too much of a bother and you only want the 32-bit OpenGL32.dll, just go to the releases folder. One thing I noticed is that the game can only handle an extension string of at most 4048 characters before the stack overflow occurs, so setting the extensions year to 2009 or earlier should work.<br />
<br />
If some older game has the same issue but you don't want to use Mesa3D, you can try my hooked OpenGL32.dll above and tell me how it performs.<br />
<br />Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com5tag:blogger.com,1999:blog-880306883673298649.post-33666522624184305452019-04-30T13:52:00.001+08:002021-06-13T00:59:33.346+08:00Fixing black border in your game image with ImageMagick.<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://cdn.discordapp.com/attachments/454274817236140033/572656819852541983/unknown.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="630" data-original-width="800" height="252" src="https://cdn.discordapp.com/attachments/454274817236140033/572656819852541983/unknown.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Star with unintended black border</td></tr>
</tbody></table>
What's with that black around the star? That's because the image has "transparent black" (or in CSS notation, <code>rgba(0, 0, 0, 0)</code>) for all fully transparent pixels, and white with varying alpha otherwise. This is not a problem for image editing software like Photoshop, but it is a problem for OpenGL, especially if you use linear interpolation to resize your image.<br />
<br />
What actually happens? If linear interpolation is enabled, the GPU will sample between the white and the "transparent black" pixels, resulting in a gray color with alpha around 0.5. This is not what you want, as it gives your image an unintended dark border, which may or may not be bad for your game.<br />
<br />
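To see the math, take a sample landing halfway between an opaque white pixel and a fully "transparent black" pixel under plain (non-premultiplied) linear interpolation; the arithmetic below is just Lua used as a calculator:<br />
<br />
<pre>-- halfway between rgba(255, 255, 255, 255) and rgba(0, 0, 0, 0)
local r = (255 + 0) / 2 -- 127.5: gray, not white
local a = (255 + 0) / 2 -- 127.5: about half opacity
print(r, a) -- a half-transparent gray, hence the dark border</pre>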
A solution is to modify your image to have "transparent white" instead. Based on this <a href="https://stackoverflow.com/a/55658717" target="_blank">answer</a>, and assuming you have ImageMagick version 7 or later, I came up with this command:<br />
<br />
<pre>magick convert input.png -channel RGB xc:"#ffffff" -clut output.png</pre>
<br />
And here's the result.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://cdn.discordapp.com/attachments/454274817236140033/572659565297139712/unknown.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="630" data-original-width="800" height="252" src="https://cdn.discordapp.com/attachments/454274817236140033/572659565297139712/unknown.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Star without black border around it.</td></tr>
</tbody></table><p>
Now, what happens here is that we replace all color channels with 255 (resulting in white) but keep the alpha values intact. The GPU then sees all the colors as white, only varying in alpha, so it will effectively only interpolate the alpha, because all the colors are white.<br />
<br />
And if you plan to pass your image to <a href="https://github.com/google/zopfli" target="_blank">zopflipng</a>, make sure not to pass <code>--lossy_transparent</code>, as that option changes all completely transparent pixels to "transparent black" again, which is the source of the problem.</p><p>UPDATE: The ImageMagick command above won't work for images with various colors. I forked the <a href="https://github.com/urraka/alpha-bleeding" rel="nofollow" target="_blank">alpha-bleeding</a> program, which uses LodePNG, to ease MSVC compilation; it can be found here: <a href="https://github.com/MikuAuahDark/alpha-bleeding" target="_blank">https://github.com/MikuAuahDark/alpha-bleeding</a>. <br /></p>Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-59276434541134948592019-04-25T12:56:00.001+08:002019-04-25T12:56:50.910+08:00VS2013 RTM cl.exe and "Use Unicode UTF-8 for worldwide language support"There's a feature in Windows 10 that lets you pass UTF-8 strings to the C function <code>fopen</code> and other ANSI WinAPI functions. This makes it feel Unix-like, where fopen expects the filename to be UTF-8. However, this doesn't mean everything works as expected: Microsoft warns us that the feature may break applications that assume a multi-byte character is 2 bytes max. And unfortunately this is true for the VS2013 RTM <code>cl.exe</code> compiler.<br />
<br />
<pre>C:\Users\MikuAuahDark>cl.exe test.c
Microsoft (R) C/C++ Optimizing Compiler Version 18.00.21005.1 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
test.c
test.c : fatal error C1001: An internal error has occurred in the compiler.
(compiler file 'f:\dd\vctools\compiler\cxxfe\sl\p1\c\p0io.c', line 2812)
To work around this problem, try simplifying or changing the program near the locations listed above.
Please choose the Technical Support command on the Visual C++
Help menu, or open the Technical Support help file for more information
</pre>
<br />
The file <code>test.c</code> is an empty file, but that error shows up regardless of the input file you specify. What happened here?<br />
<br />
It turns out there's a check in <code>c1.dll</code> which is basically equivalent to this C code:<br />
<br />
<pre>CPINFO cpInfo;
UINT chcp = GetACP();
GetCPInfo(chcp, &cpInfo);
if (cpInfo.MaxCharSize > 2) internal_error("f:\dd\vctools\compiler\cxxfe\sl\p1\c\p0io.c", 2812);</pre>
It assumes the max multi-byte size is 2 bytes, but in this case I enabled a feature called "Use Unicode UTF-8 for worldwide language support", so this is what happens:<br />
<ol>
<li>GetACP returns 65001</li>
<li>GetCPInfo returns information about UTF-8 code page, where max char size is 4.</li>
</ol>
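You can observe the same values yourself with a tiny program (my own check, not part of the compiler; it compiles with any MSVC version):<br />
<br />
<pre>#include <windows.h>
#include <stdio.h>

int main(void)
{
    CPINFO cpInfo;
    UINT acp = GetACP();
    GetCPInfo(acp, &cpInfo);
    /* With "Use Unicode UTF-8 for worldwide language support" enabled,
       this prints ACP=65001 MaxCharSize=4, tripping the check above. */
    printf("ACP=%u MaxCharSize=%u\n", acp, cpInfo.MaxCharSize);
    return 0;
}
</pre>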
Is there any workaround for this? I'm afraid there's no way. Basically, we must make sure cl.exe doesn't see 65001 as the code page, because there's an explicit check for it. There's Locale Emulator, but that only emulates the locale string, not the code page.<br />
<br />
If anyone has found out how to update VS2013 in 2019, please comment below. Yes, using VS2013 is mandatory in my case, because I need to ensure compatibility with Windows Vista, where I must target lower than Windows 7 SP1.Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-37162833219850376692019-01-25T23:49:00.000+08:002019-01-25T23:49:00.836+08:00Re-Implementing Live2D runtime in LÖVE: Performance OptimizationPlease see my <a href="/2019/01/re-implementing-live2d-runtime-in-love.html" target="_blank">previous blog post</a> for more information. Do you think I'm really satisfied with the 1.2ms performance? No. I think I can do more. Note that when I write a time measurement, it means the time taken to update the Kasumi casual summer model (shown below).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<div style="margin-left: 1em; margin-right: 1em;"><img alt="" border="0" data-original-height="559" data-original-width="800" height="223" src="https://cdn.discordapp.com/attachments/462976834842132481/538379583624314921/unknown.png" title="Kasumeme Toyama" width="320" /></div></div>
<br />
<h3>
Code Optimization</h3>
I saw that after my previous blog post, there were many optimizations that could still be done. I started by reducing temporary table creation: instead of creating a new table in the function body, I create the table at file scope and reuse it over and over. In the motion manager code, I used <a href="https://gist.github.com/Positive07/adc1ed058a9ae350df559e2b100949dc#file-batch-lua-L83L107" target="_blank">this variant</a> of the algorithm to remove motion data when necessary. I also localized functions that are called every frame, mostly functions from the <code>math</code> namespace like <code>math.min</code>, <code>math.floor</code>, and <code>math.max</code>, and cached variables that are used multiple times to reduce table lookup overhead (see the sketch below).
<br />
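For illustration, the patterns above look like this (a simplified sketch, not Live2LOVE's actual code):<br />
<br />
<pre>-- localize hot functions once at file scope (same for math.floor, etc.)
local min, max = math.min, math.max

-- one reusable scratch table instead of a fresh table per call
local scratch = {}

local function clampParams(params)
    local n = #params
    for i = 1, n do
        local p = params[i] -- cache the lookup; p is used several times
        scratch[i] = min(max(p.value, p.min), p.max)
    end
    for i = n + 1, #scratch do
        scratch[i] = nil -- clear leftovers from a previous, longer call
    end
    return scratch
end</pre>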
Although the optimizations listed above don't really save a significant amount of time when the JIT is used, they are a somewhat significant optimization for the non-JIT codepath. The next optimization was converting the hair physics code to use an FFI datatype on the JIT codepath, and a class otherwise. Testing gives better performance: 1.17ms. Not much, but it's better than nothing.<br />
<br />
A problem arose when I inspected the verbose trace compiler output: I noticed lots of "NYI: register coalescing too complex" trace aborts in the curved surface deformer algorithm, which indicates I was using too many local variables there. At first this was a bit hard to solve, but I managed to optimize it by analyzing the interpolation calculation done by the curved surface deformer, and that eliminated the trace aborts entirely. Testing gives slightly better performance: 1.15ms.<br />
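<br />
I can't reproduce the exact deformer code here, but the general shape of the fix is: fold the interpolation weights first so fewer values are live at once, instead of keeping every intermediate coordinate in its own local. Illustrative only:<br />
<pre>-- Bilinear interpolation of one point inside a quad patch.
local function bilerp(x00, y00, x10, y10, x01, y01, x11, y11, s, t)
    local w00 = (1 - s) * (1 - t)
    local w10 = s * (1 - t)
    local w01 = (1 - s) * t
    local w11 = s * t
    return x00 * w00 + x10 * w10 + x01 * w01 + x11 * w11,
           y00 * w00 + y10 * w10 + y01 * w01 + y11 * w11
end</pre>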
<br />
<h3>
Rendering Optimization</h3>
The last optimization I did is the <a href="https://love2d.org/wiki/Mesh" target="_blank">Mesh</a> optimization. Since I copied the Live2LOVE Mesh rendering codepath as-is, it was actually uploading lots of duplicate data to the GPU, duplicating the vertices based on the vertex map manually on the CPU side, because I thought the vertex map could change. This can be very slow on the non-JIT codepath because the amount of data that needs to be sent through <a href="https://love2d.org/wiki/Mesh:setVertices" target="_blank">Mesh:setVertices</a> can be too much. As a reference, before this optimization, the non-JIT codepath (LuaJIT interpreter) took 6ms.<br />
<br />
After getting a better overview of how Live2D rendering works, it is safe to assume the vertex map won't ever change, so I started by reducing the amount of vertices that need to be uploaded to the GPU and sending the vertex map once instead. This actually gives a more significant performance boost on the CPU side. The JIT codepath now runs at 1.05ms, very close to Live2LOVE's 1ms. The interpreter (LuaJIT) took 4ms, yes, 4ms to update the model. Unfortunately, vanilla Lua 5.1 took as long as 12ms.<br />
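<br />
A sketch of what this boils down to in LÖVE, assuming <code>uniqueVerts</code> and <code>vertexMap</code> come from the model data (names are illustrative):<br />
<pre>-- Create the Mesh from unique vertices only; the vertex map describes
-- the triangles as 1-based indices into those vertices.
local mesh = love.graphics.newMesh(uniqueVerts, "triangles", "stream")
mesh:setVertexMap(vertexMap) -- uploaded once, never changes

-- Per frame, only the (much smaller) unique vertex list is re-sent.
mesh:setVertices(uniqueVerts)</pre>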
<br />
The non-JIT codepath is forced to use the table variant of <a href="https://love2d.org/wiki/Mesh:setVertices" target="_blank">Mesh:setVertices</a> because, without the JIT, the overhead of FFI is higher than the benefit of using the Data variant. Also, the non-JIT codepath can't assume FFI is available at all: LuaJIT can be compiled without FFI support (but who wants to do that?), or the code may be running under vanilla Lua 5.1. One of my goals for this project is to provide maximum compatibility with Lua 5.1 too, even though LÖVE is compiled with LuaJIT by default.<br />
<br />
<h3>
Experimental Rendering Codepath</h3>
Unfortunately, I had to throw away the mesh batching technique I mentioned in my previous blog post. It causes a very significant slowdown in both the JIT and non-JIT codepaths with very little performance improvement on the GPU, so I decided to abandon it and use the old approach of updating the model and drawing each Mesh one by one. You can see in the screenshot below that the model takes 166 draw calls<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<div style="margin-left: 1em; margin-right: 1em;"><img alt="" border="0" data-original-height="559" data-original-width="800" height="223" src="https://cdn.discordapp.com/attachments/462976834842132481/538379583624314921/unknown.png" title="Kasumeme Toyama" width="320" /></div></div>
plus additional draw calls caused by <a href="https://github.com/slages/love-imgui" target="_blank">IMGUI</a>.Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-5568970705726763572019-01-16T23:34:00.002+08:002019-01-16T23:39:29.073+08:00Re-Implementing Live2D runtime in LÖVE<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/MZewv9Qhuuk/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/MZewv9Qhuuk?feature=player_embedded" width="320"></iframe></div>
<div style="text-align: center;">
(video above shows my implementation in action using LÖVE framework)</div>
<br />
Live2D is a nice thing; the fluid character movement gives an additional touch to games that use it. For my personal use, however, it has an annoying limitation: Lua is not officially supported, let alone LÖVE. Well, there are 2 ways to overcome this.<br />
<br />
<h3>
Writing external Lua C module</h3>
This is probably the simplest way (but not that easy): link with the Live2D C++ runtime library files, add code which interacts with Lua (and LÖVE), and you get <a href="https://github.com/MikuAuahDark/Live2LOVE" target="_blank">Live2LOVE</a>. This module is actually very fast, considering the Lua-to-C API overhead, and works by letting Live2D do the model transformation and LÖVE do the model rendering. Since it uses the official Live2D runtime, it has these limitations:<br />
<ul>
<li>Must link with OpenGL. This is not a problem since LÖVE uses OpenGL already.</li>
<li>VS2017 is not supported (you have to use Cubism 3 for that). It does support compilers down to VS2010, and LÖVE requires VS2013, so this is not really a problem unless you compile LÖVE against the VS2017 runtime.</li>
<li>MinGW/Cygwin compilation is not supported. Not really a problem, since compiling LÖVE on Windows using MinGW/Cygwin isn't supported either.</li>
<li>Linux and macOS are not supported. This is the real problem: not all people use Windows to run LÖVE.</li>
</ul>
So another idea that comes to my mind is:<br />
<br />
<h3>
Re-Implement Live2D Cubism 2 Runtime</h3>
In Lua, because why not. This was actually a somewhat time-consuming process and took me more than 3 weeks to get model rendering working as intended. My additional goal was to have a Live2LOVE-compatible interface too, so switching between implementations is simply a matter of changing the "require" module (see the sketch below). From now on, I'll refer to the <b>Live2D Cubism 2 Runtime</b> simply as the <b>Live2D Runtime</b>.<br />
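<br />
Something along these lines. This is a sketch: the module name for my implementation is a placeholder, and <code>loadModel</code> stands in for whatever the Live2LOVE-compatible entry point is:<br />
<pre>-- Flip this to switch between the C++ module and the pure-Lua runtime.
local useNative = false
local live2d = useNative and require("Live2LOVE") or require("lua-live2d")

-- Everything below stays the same regardless of the implementation.
local model = live2d.loadModel("kasumi_model.json")</pre>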
<br />
I started by downloading the Live2D Runtime for WebGL (JavaScript) and beautifying the code (since it ships as a .min.js file). As expected, the method names are obfuscated. So I unpacked the Android version of the Live2D C++ Runtime and deobfuscated the method names by matching the arguments against the Live2D C++ Runtime header files, with the help of IDA's pseudocode decompiler to compare the implementation with the JavaScript one. This whole process took 2 weeks.<br />
<br />
Then I started writing the Lua equivalent code based on the JavaScript Live2D Runtime. This is the easiest part, since JavaScript is also dynamically typed; it's mostly a matter of carefully translating 0-based indexing code to 1-based (example below), then fixing bugs, writing the Live2LOVE-compatible interface so I can reuse my existing Live2D viewer code that uses Live2LOVE, and testing.<br />
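<br />
A trivial example of the kind of translation involved (any index arithmetic like <code>arr[i * 2]</code> needs extra care):<br />
<pre>-- JavaScript: for (var i = 0; i < count; i++) { sum += arr[i]; }
-- Lua equivalent with 1-based indexing:
local sum = 0
for i = 1, count do
    sum = sum + arr[i]
end</pre>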
<br />
After a week, I got model rendering, motion, and expression working, reusing the existing code from my LÖVE-based Live2D model viewer with my own implementation instead of Live2LOVE. Then the next problem came: it's 4x slower than Live2LOVE. Live2LOVE took 1ms to render Kasumi (casual summer clothes) while my implementation took 4ms to render the same model, even though I had already written the code carefully so that LuaJIT happily accepts it and doesn't bail out to the interpreter as much as possible.<br />
<br />
I started optimization by using "Data" objects instead of plain tables when updating the "Mesh" object for drawing. This cuts the update time down significantly, from 4ms to 1.7ms, so using a table to update the "Mesh" object is always a bad idea. Someone on the LÖVE Discord then said "try to use all FFI instead of plain tables". At first I didn't agree with him because I want to preserve compatibility with mobile, but then I decided to proceed by falling back to tables in case FFI is not suitable (JIT is off, FFI support is not compiled in, or running under vanilla Lua). I swapped most types from plain tables to FFI objects and got as low as 1.2ms, close to Live2LOVE's 1ms.<br />
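<br />
A sketch of the FFI codepath, assuming LÖVE's default vertex format (x, y, u, v as floats plus RGBA bytes); <code>vertexCount</code>, <code>mesh</code>, and <code>vertexTable</code> are assumed to exist:<br />
<pre>local hasFFI, ffi = pcall(require, "ffi")

if hasFFI and type(jit) == "table" and jit.status() then
    -- Fill a ByteData through an FFI pointer, then hand it to the Mesh.
    ffi.cdef("typedef struct { float x, y, u, v; uint8_t r, g, b, a; } vertex_t;")
    local data = love.data.newByteData(ffi.sizeof("vertex_t") * vertexCount)
    local verts = ffi.cast("vertex_t*", data:getPointer())
    -- ... write verts[0] .. verts[vertexCount - 1] here ...
    mesh:setVertices(data)
else
    -- Fallback: plain table of {x, y, u, v, r, g, b, a} vertices.
    mesh:setVertices(vertexTable)
end</pre>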
<br />
<h3>
Conclusion</h3>
Re-implementing the Live2D Runtime was a nice experience. It gives me a better sense of when to start optimizing code instead of optimizing early, plus an overview of how Live2D model transformation works. Apparently it can't beat the C++ version of the official Live2D Runtime in terms of model updating, but I think it can beat it in terms of model rendering. I'm thinking of a "Mesh" batching technique, which is basically: accumulate vertices to render, then draw 'em all at once when a flush is requested. I'm satisfied with the current result, but I think I can still do better<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://bandori.party/card/1081/Tae-Hanazono-Power-Boasting-Talent/" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="" border="0" data-original-height="800" data-original-width="800" height="200" src="https://i.bandori.party/u/c/transparent/1081Tae-Hanazono-Power-G1T7zf.png" title="Hopefully she isn't worshipping Big Chungus" width="200" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
... and hope to God it succeeds without problems.Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-60092372065561762692018-12-01T21:43:00.000+08:002018-12-01T21:43:04.753+08:00San Andreas "Modern" Modding Tools & Some RantWhen I search Google for tutorials about modding GTA San Andreas, they mostly still use very old tools. It's usually the Indonesian modding community that keeps using old modding tools for "compatibility" reasons, but in fact the old tools were unstable, and of course newer ones are more stable (although they may need a modern system). For today's blog post, I just want to tell you that there are better tools out there that you can use as replacements. Note that this blog post can be considered a rant, depending on your perspective.<br />
<br />
First, the IMG editor. This is an essential tool for modifying the game's models and textures. Usually, the blog posts tell you to use "IMG Tool v2.0". Excuse me, there's a way better tool for this: Alci's IMG Editor. Compared to IMG Tool v2.0, Alci's IMG Editor is far superior: a bigger window, import/export of multiple files, and most importantly, it can create a new IMG file from scratch, unlike IMG Tool, where you are restricted to existing IMG files.<br />
<br />
Second, the TXD tool. This is a must-have tool if you plan to modify texture files (e.g. creating your own paintjob). Usually, the blog posts tell you to use "TXD Workshop", but excuse me, TXD Workshop is very limited, and the last time I tried the 15 Years edition, the mipmap generation was buggy. If you're interested in how it's buggy: my image is 4096x4096, compressed to DXT1, with 12 levels of mipmaps generated. It took around 3 minutes on my i5 7th gen laptop, and the result fails to generate mipmaps for the odd levels (3, 5, 7, ...). This results in "black shade" when the game renders the texture. So let me introduce you to <a href="https://github.com/quiret/magic-txd" target="_blank">Magic.TXD</a>: a modern take on TXD Workshop, easy to use, and it generates mipmaps faster and, of course, without bugs. It also comes with localizations popular among SA modders <strike>including Indonesian</strike>.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://cdn.discordapp.com/attachments/497996996251090944/518291829796503553/Screenshot_570.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="356" data-original-width="743" height="153" src="https://cdn.discordapp.com/attachments/497996996251090944/518291829796503553/Screenshot_570.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The texture created with TXD Workshop + mipmap. Level 3, 5, 7, and 11 is buggy and game will render that to black (notice shade of black near the headlights). That's why it looks dark for some reason.</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://cdn.discordapp.com/attachments/497996996251090944/518414902080176138/unknown.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="303" data-original-width="565" height="171" src="https://cdn.discordapp.com/attachments/497996996251090944/518414902080176138/unknown.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The texture create with Magic.TXD. Now everything looks nice and no more black shade.</td></tr>
</tbody></table>
<br />
This one is not really a modding tool, but I want to include it anyway. If you're a script modder, you may want to consider <a href="https://gtaforums.com/topic/890987-moonloader/" target="_blank">MoonLoader</a>, which is a "modern version" of CLEO. Did you know that CLEO scripts are actually slow compared to Lua scripts? Or maybe you hate CLEO scripts and prefer writing in a different language? Then MoonLoader is for you. One of its most notable features is error handling. A CLEO script errors? Goodbye to your current game session. A MoonLoader script errors? The error is logged and that script's execution stops.<br />
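<br />
For reference, a MoonLoader script is just a Lua file dropped into the <code>moonloader</code> folder; a minimal one looks roughly like this (treat it as a sketch of the general shape, not copy-paste material):<br />
<pre>script_name("example")

function main()
    while true do
        wait(0) -- yield every frame; an error here only kills this script
    end
end</pre>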
<br />
As a side note, if you only want to dump TXD textures to PNG or TGA, FFmpeg supports the TXD format and can transcode it to common image formats (but sadly you can't create TXD files with it).<br />
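<br />
For example, something like this should do it (assuming your FFmpeg build has the TXD decoder enabled; the file names are just examples):<br />
<pre>ffmpeg -i vehicle.txd texture_%d.png</pre>
Since a TXD archive may contain multiple textures, the <code>%d</code> pattern writes each one to its own PNG.<br />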
<br />Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0tag:blogger.com,1999:blog-880306883673298649.post-68283362596079800972018-10-28T11:42:00.000+08:002018-10-28T11:42:00.969+08:00Music quality is associated with file size. True?People often say that "if the audio file is big, that means the quality is good". Actually, not really. Let me tell you a true story.<br />
<br />
One day when I was browsing Facebook, there was a music video of Romeo and Cinderella performed by <a href="http://bandori.wikia.com/wiki/Romeo_and_Cinderella" target="_blank">Poppin'Party</a>. It was part of the BanG Dream x VOCALOID collaboration, so yeah, nothing unusual, except the video uses the full version of the song. With my element inspection & dev console magic, I managed to get the audio file (it's not really legal, forgive me). The audio is encoded as AAC-HE at a 48Kbps bitrate with a sample rate of 44100/22050Hz. The resulting file size is ~1.6MB. If you think the quality must be bad, keep reading the story below.<br />
<br />
A few days later, one of my friends managed to get a 320Kbps MP3 version of the song and posted a screenshot about it. The screenshot displayed the full artist name, title, and even cover art. However, as you may know, those can be easily edited with a lot of free software (FFmpeg allows you to set the artist name and title, and the cover art can be easily obtained from the clickable "Poppin'Party" link above). When I saw his post, I was skeptical about it, so I asked if he could send me the audio file for analysis purposes, in exchange for the AAC-HE version that I have. He lent me the MP3 version, which is around ~10MB (normal for 320Kbps MP3), and I lent him my AAC-HE version, which is ~1.6MB (almost 10x smaller). Sadly, he couldn't tell the difference between his MP3 version and my AAC-HE version; he only said that mine has lower audio volume (which is not really related).<br />
<br />
Now, it's time to put them through FL Studio's "Wave Candy" plugin. First, I set up FL Studio to listen to Stereo Mix (it's rare to find a laptop/PC with this feature). The first test used the AAC-HE version that I got from Facebook. The result? A bit surprising. I already knew that AAC-HE retains decent quality at bitrates as low as 32Kbps; this AAC-HE version preserves frequencies up to ~16KHz (better band preservation). Then, when I tested the MP3 file, my guess was correct: the band preservation is only ~12KHz, way lower than the AAC-HE version. I can also differentiate the audio quality with my ears alone, and they tell me that the MP3's audio quality is lower than the AAC-HE version's.<br />
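<br />
If you don't have FL Studio, FFmpeg's showspectrumpic filter can render a similar spectrum image; just look at where the frequencies cut off (file names are examples):<br />
<pre>ffmpeg -i song.m4a -lavfi showspectrumpic=s=1024x512 spectrum.png</pre>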
<br />
From that, we can conclude that a smaller audio file size doesn't mean lower quality. It depends on the encoding used and how lossy the audio was before it was re-encoded, because, as you may know, re-encoding lossy audio degrades the quality more and more. You also can't trust the file size alone: say I re-encode that Romeo and Cinderella song (the AAC-HE version) to 44100Hz WAV (which yields ~40MB). I won't gain anything; in fact, I only waste drive space (it will still only preserve frequencies up to ~16KHz). The quality is then exactly the same as the AAC-HE version, and if I re-encode the WAV to AAC-HE again, the quality won't be the same as the first encode.<br />
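<br />
You can verify that yourself with a re-encode like the one below (file name is an example): the WAV balloons in size, but the spectrum, and thus the quality, stays capped at whatever the AAC-HE version kept.<br />
<pre>ffmpeg -i romeo_and_cinderella.m4a -ar 44100 romeo_and_cinderella.wav</pre>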
<br />
Then, back to the title of this post. <b>Music quality is associated with file size. True?</b> The answer is <b>FALSE</b>.<br />
<br />Miku AuahDarkhttp://www.blogger.com/profile/13164913176362902839noreply@blogger.com0