A JavaScript Priority Ready Queue Library

I’ve just finished a draft version of my long-envisioned priority-ready-queue project (which I’m currently working on incorporating into Plurk).

It’s a super tiny JavaScript library, providing a two-level priority-based queue. I’m positioning it as a hacky/dirty alternative for managing dependency issues between DOMContentLoaded handlers (an alternative, that is, to struggling with fully-fledged module systems).

However, it’s not tightly coupled to DOMContentLoaded: the “ready” signal has to be explicitly triggered by the library client. So it doesn’t actually require jQuery either, even though I’m currently modeling implementation details after jQuery’s way of calling multiple ready handlers, and my own use cases are by and large jQuery-only (together with IE8 support).
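As a rough illustration of the pattern (sketched in Python for brevity; the actual library is JavaScript, its real API differs, and every name below is made up): handlers are queued at one of two priority levels, nothing runs until the client explicitly triggers the “ready” signal, higher-priority handlers run first, and handlers registered after the trigger run immediately.

```python
class PriorityReadyQueue:
    HIGH, LOW = 0, 1

    def __init__(self):
        self._queues = ([], [])   # index 0: high priority, index 1: low
        self._ready = False

    def add(self, handler, priority=LOW):
        if self._ready:
            handler()             # already ready: run immediately
        else:
            self._queues[priority].append(handler)

    def trigger_ready(self):
        self._ready = True
        for queue in self._queues:        # high first, then low
            for handler in queue:
                handler()
            queue.clear()

order = []
q = PriorityReadyQueue()
q.add(lambda: order.append('low'), PriorityReadyQueue.LOW)
q.add(lambda: order.append('high'), PriorityReadyQueue.HIGH)
q.trigger_ready()                       # high-priority handler runs first
q.add(lambda: order.append('late'))     # registered after ready: runs at once
```

After trigger_ready() fires, the queue degenerates into run-immediately semantics, which is also how late registrations behave with jQuery’s ready handling.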

Please check it out! And as always, issues and PRs and whatsoevers are welcome.

Read More

In the Making of Mnjul’s Intimate Home 4931 (IX)



  • Further optimized jQuery selectors: query as little as possible, cache whenever possible.
  • The loading screen and all the loaders for background music/rain sounds now use jQuery Deferred as a synchronization barrier; the code is a whole lot cleaner.
  • Functions used only once are now deleted after use.
  • Avoided building DOM elements with jQuery plus strings; document.createElement is used directly instead.
  • Used some ES5 array functions, such as Array.prototype.forEach, to replace simple for(;;) and for(in) loops. Mainly because it just “feels” better; the actual performance might even be worse. But having been at my current job for a while, I’ve become like timdream, advocating spending less effort on micro-optimization.
  • Reduced the cases of adding the same class or CSS to a large number of children; the class now goes on a common ancestor whenever possible, with a CSS rule giving all the children that style. Especially needed when assigning the white/blue/purple cool colors to each page’s content.
  • Likewise, when too many children need the same kind of event bound, it is now bound on a common ancestor instead, i.e. jQuery.on() delegation.
  • Since some WebKit version (I don’t know which), webfonts can take text-rendering: optimizeLegibility;, so I used it.
  • Some text originally added manually with jQuery now uses CSS :after and :before.
  • The syntax-violating custom variables in my CSS are now defined as CSS variables. In practice they are still substituted with JS; it’s just that the CSS itself is now syntactically valid.
  • Switched the CSS linear-gradient syntax to the “new” syntax.
  • Fixed a bug where the rain would disappear outright when the wind picked up.
  • Fixed a bug where progressively loading Fragments and Shards would accidentally re-serialize & re-parse HTML & jQuery objects.
  • Switched to animationend/transitionend events to decide what the code does next, instead of setTimeout or jQuery animation callbacks (the latter also because I want to use native CSS animations).
  • Because the behavior of -moz-user-select changed, my original CSS rule was effectively a no-op. After fixing this bug, it is now correctly applied to all elements. Hey, this couldn’t be related to text-selection in Firefox OS 2.1 and 2.2, could it?
  • devicePixelRatio is no longer swallowed by my Python minifier.

Honestly, since at work I only have to tune for Gecko, making my own site compatible with all sorts of browsers, including not-so-new versions, poses quite a few challenges.

Read More

In process of Mnjul’s Intimate Home 3811 Melody (VII)


Besides, the previous post was more than four years ago; this is, after all, a site released in 2011. I’ve recently changed various odds and ends again; apart from new content and corrections to reflect reality, here are the updates:

  • Replaced the Adobe 繁黑體 font with the recently released 思源黑體. The Adobe 繁黑體 I had been sneakily using only came in the B weight, whereas 思源黑體 offers everything from Extra Light to Heavy; I switched straight to Light, and after re-tuning the antialiasing parameters, the overall aesthetics improved a lot. Since 思源黑體’s Japanese kana and kanji also match its Chinese glyphs well, I removed 小塚ゴシック. I kept Myriad Pro, though, because its Latin letters and digits look nicer (in purely alphanumeric contexts, anyway). Also switched Myriad Pro from the Regular weight to Light and adjusted its antialiasing parameters.
  • Album thumbnail titles now use <br /> for manual line breaks, so there is no longer any “last line with only one character” situation.
  • Corrected line spacing, paragraph, and dialog-box margin sizes all over the place.
  • Album data is now represented in JSON instead of XML. It saves some space, and since the various XML parsers were written modularly to begin with, not much code had to change.

Actually I’d also like to make the whole site support HiDPI resolutions, but that would mean regenerating too many images, so it will have to wait for another time.

Read More

Solving Coursera MalSoftware002 Bonus Quiz Without (Fully) Understanding It

Big disclaimer for Coursera staff related to MalSoftware002 course: If you think this article (which was made public after the course ended, NOT on 2014-May-24, which was when I solved the quiz) should not be public, please mail me at b94075 (a) csie (dot) ntu (dot) edu (dot) tw.

In spring 2014, I took the second offering of Malicious Software and its Underground Economy: Two Sides to Every Story at Coursera, by Dr. Cavallaro of the University of London. There was a bonus quiz that gave us a stripped, but not packed, x86 32-bit ELF executable. The executable took an arbitrary input string and printed out a response based on a “secretive algorithm” operating on that string. We were asked to figure out the special input string (i.e. the key/passphrase) that would produce the “You got it right!”-like response. The bonus quiz was considered solved if we uploaded the correct key to Coursera. Being x86-assembly-savvy, I quickly jumped into the quiz and began attempting to solve it.

So, I had an executable, and naturally the next step was to disassemble it. I used IDA as instructed in the course, but in the end I did not use it to debug the program; objdump would have sufficed too.

  • Looking into the start() procedure (which was not stripped, or the OS’s executable loader would have had trouble), I figured main() was probably at 0x08048A24.
  • Looking into 0x08048A24, I saw very standard printf(), read()-from-stdin, strlen() and strncpy() calls. Instructions from 0x08048AE9 to 0x08048B0E were of interest, in combination with the instructions at 0x08048B5A and 0x08048B61. Basically, the program made a near call to an absolute indirect address based on the first byte of the input: if it was 'C', the program would eventually call 0x080487CC; if it was 'N', then 0x08048743; if it was 'A', then 0x0804882C. Otherwise, the program would loop infinitely at 0x08048B2A and 0x08048B2B.
  • If I were a security analyst, I would be very interested in the procedure at 0x0804882C, as it contained a hell of a lot of system calls related to network operations to some server. However, I guessed that the quiz should be solvable without an internet connection, so I skipped the procedure altogether.
  • The procedure at 0x080487CC only tried to write a file in /tmp/ with string "woot!", statically. Not interesting.
  • The procedure at 0x08048743…well! The moment I saw the procedure end with a non-standard epilogue, I knew I was probably hitting the jackpot. The push and pop instructions were basically destroying the stack frame, and the instructions at 0x08048765 and 0x08048768 were actually trying to jump to the address stored in ebx, very much what standard malware would do.
  • Here was where things got a little twisted. To determine where the program was jumping at 0x08048769, I needed to (statically) analyze how the content of ebx was generated, and…I was simply too lazy to do that. Dynamic analysis to the rescue! I hooked gdb up to the program, set a breakpoint at 0x08048769, and stepi’d from that point on.
  • Of course, this program was trying to act like malware, and hooking gdb up to it directly would not work: on execution, it would say "Dude, no debugging ;-)". No worries, as this could be quickly circumvented. In IDA, the string was soon discovered to be printed in the procedure beginning at 0x080489E3, which was in turn called from main() by the call instruction at 0x08048A30. As the 0x080489E3 procedure didn’t return a “success” value to be checked, the very easy way was to patch the call instruction with nop instructions. And it worked like a charm…the program now allowed gdb into its life.
  • The breakpoint at 0x08048769 and subsequent stepi commands in gdb revealed that the program continued its execution onto 0x0804876c and 0x0804879d. From there I immediately saw a very eye-catching strcmp() call, comparing the string stored at the address pointed to by what was stored in ebp+8 against the string "@EHJ~@DZEL". Well, I believed I was more than halfway through the quest…
  • Anyway, I still didn’t want to study the instructions to see how the string argument to strcmp() was generated. So I set a breakpoint at 0x080487B2, and when execution reached that point, I printed out the (DWORD) content of the address pointed to by ebp+8 (it would later turn out that this address was always 0xffffd161, some address on the stack, invariant to the input string), then ran the program with different sets of simple inputs, such as "Na", "Naa", "Naaa", "Nb", "Nbbb", etc.
  • I soon realized that each character after 'N' in the input string was mapped to another character by the algorithm. I also discovered that the mapping was invariant to the position of the character and to the length of the string. So as long as I figured out the mapping, I could just reverse it on "@EHJ~@DZEL" to obtain the key…except that I didn’t want to make the effort to figure out the mapping algorithm by studying the instructions.
  • But wait, being a CS major, I knew how to program. Based on the invariants above, I guessed I could generate a look-up table of the mapping. Since the program ran in the terminal and the input should be ASCII7, I could write a tiny script to feed every possible two-char input ('N' plus one printable ASCII7 character) to the program and dump the (DWORD, though WORD or BYTE would also suffice) content of the memory at 0xffffd161 to see the mapping. This could be automated with the following .gdbinit file:
    file ./reverse-challenge
    b *0x080487b2
    run
    x 0xffffd161

    and this little Python script:

    import subprocess

    for o in range(32, 127):
      s = 'N' + chr(o)
      proc = subprocess.Popen('gdb', stdout=subprocess.PIPE, stdin=subprocess.PIPE)
      # feed the candidate input through the pipe: the inferior reads
      # its stdin via gdb's stdin
      proc.stdin.write(s + '\n')
      proc.stdin.close()
      result = proc.stdout.read()
      result = result.split('\n')
      for res in result:
        if res.find('ffffd161') != -1:
          print res
      print s
      print "--"

  • And I ran the Python script. Voilà! The mapping seemed to map printable ASCII7 chars to printable ASCII7 chars, perfectly one-to-one and onto, and I got the look-up table. What else do I need to say now?
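The final reversal step can be sketched like this. The mapping below is made up (the real look-up table came from the memory dumps; any one-to-one map of printable ASCII7 works for the illustration), while "@EHJ~@DZEL" is the string the program strcmp()s against:

```python
# hypothetical mapping: plaintext char -> transformed char
# (a simple bijection on the 95 printable ASCII7 characters)
mapping = {chr(c): chr((c - 32) * 7 % 95 + 32) for c in range(32, 127)}

# invert the one-to-one, onto mapping...
inverse = {v: k for k, v in mapping.items()}

def recover_key(target):
    # ...and map the target string back, prepending 'N' since the
    # program dispatches on the first byte of the input
    return 'N' + ''.join(inverse[ch] for ch in target)

key = recover_key('@EHJ~@DZEL')   # with the real table, this is the passphrase
```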

In all, it was pretty easy once I patched the program to circumvent its anti-debugging mechanism. The quiz would have been a lot harder if the anti-debugging mechanism could not have been circumvented as easily as patching 5 bytes of instructions, and if the mapping algorithm had depended on the position of the character being mapped and on the length of the key. Those would probably have forced me to study the instructions of the algorithm, which, as you can see by now, was something I didn’t do at all 😉

Finally, I am curious what the network operations at 0x0804882C are actually doing. It appears that the program tries to write "ok\n" to the server and write the server’s response to a /tmp/ file, but I’ve never seen any response from the server. Hmmmm…!

Read More

Bash shell script to do cross-file/directory full-text searching on Messenger Plus! chat logs

Well, even though MSN Messenger, Windows Live Messenger, and Messenger Plus! (whether Live or not) have been retired, I, and probably most users of the software, still keep a huge, decade-long collection of chat logs saved by Messenger Plus!. And sometimes I still want to do cross-file full-text searches to dig interesting, useful, or memorable-but-forgotten chats out of the hundreds of megabytes of logs. This wouldn’t be too much of a problem if I still had WLM + MP!L installed on Windows, since MP!L had a built-in chat log viewer that could do cross-file/directory full-text searching; yet, as I have almost fully transitioned to OS X, I had to build my own tool for the purpose.

And here’s the bash shell script that does the job. Since MP! saved logs in Windows’s “Unicode” format (which was UCS-2 pre-Windows 2000, and UTF-16 since Windows XP), encoding conversion would be needed.


find . -name "*.xml" -o -name "*.txt" -o -name "*.html" |
    while read file; do
        printf "%s\x1b[K\r" "$file"
        grep_out="$( iconv -f UTF-16 -t UTF-8 "$file" | grep --color=always -n -i "$1" )"
        if [[ $? == 0 ]]; then
            printf "\x1b[K--\n\x1b[1;33m%s\x1b[0m\n%s\n--\n" "$file" "$grep_out"
        fi
    done
Then I just have to run the script in the chat logs’ directory: ./conv_grep.sh "Patchou was great", and the job is done just as it would be with grep -r "Patchou was great" . .

Note that I’ve never installed Messenger Plus! for Skype and I don’t know if the script would work with chat logs saved from MP! for Skype. And this is only tested on OS X Mavericks.

And some disclaimer for anyone who wants to use the script for themselves: the script is provided AS-IS. Use at your own risk and don’t hold me liable for any data loss. Additionally, I’d discourage use of the script without any knowledge in shell scripting.

Read More

Using cqlengine Python Library to Operate Cassandra in CQL3 (in place of pycassa)

So I have been messing around with Cassandra recently, mainly evaluating it for work purposes. This article assumes readers already have preliminary knowledge of Cassandra, and are familiar with terms such as Column Family, Validator & Comparator, and so on.


Preface: pycassa or cqlengine

As our backend still largely consists of Python code, my first priority was to find a Python library that was up-to-date, compatible with our other libraries, and future-proof. Searching the internet, I found my first choice: pycassa. The pycassa library makes Cassandra feel like a conventional key-value database: you set and retrieve objects (in the form of Python dicts) associated with Row Keys. The library automatically does the rest of the dirty work for you, such as figuring out the Row Key Validator, Column Comparator and Column Validator.

One drawback of pycassa is that it appears not to be actively maintained, and more importantly, it uses the old Thrift protocol with no support for the newer CQL3, while DataStax now apparently favors the latter over the former. As I feared that later versions of Cassandra might drop Thrift support, or expose new functionality only through CQL3, I ditched pycassa for the CQL3-enabled cqlengine. cqlengine is also maintained more actively, at least judging from its GitHub activity (not that it is flawless; many times its documentation was not explanatory enough, and I had to look directly into its source code to clarify things).


Schematically Converting from pycassa/Thrift to cqlengine/CQL3

Essentially, data schemas in Cassandra may be split into two categories:

The first one is more similar to a key-to-object mapping: you specify the key of an object you want to manipulate, and then you can add, modify, and delete arbitrary fields (“columns”) in the object. Something like this in pycassa…

posts.insert(1234567, {'author': 'admin', 'content': 'this is an example'})

…may be done this way in cqlengine:

class Posts(Model):
    post_id = columns.Integer(primary_key = True)
    author = columns.Text()
    content = columns.Text()
Posts.create(post_id = 1234567, author = 'admin', content = 'this is an example')

It is probably reasonable to assume that even though you operate schema-lessly with pycassa, chances are you still have a conceived (and perhaps even documented somewhere) structure for your objects. If so, cqlengine’s requirement to write that structure out in actual Python code might not entail too much burden.

One thing to note: with pycassa/Thrift, it is possible to use column names (i.e. Column Comparators) of types other than strings. With the conversion to CQL3 illustrated above, this would not be easy to replicate (but I can’t really think of a sensible use case for non-string column names in this category of data schema).

The second category is the concept of wide rows (see the context around User_Timelines in this old Cassandra introduction post). For example, if I want to keep a list of posts made by a user, a wide-row Column Family would be conceptually like this in Thrift:

(row key)   (column name → column value)
admin       ‘2013-11-19 09:26:04’ → 1046   ‘2013-11-19 14:33:27’ → 1049   ‘2013-11-20 11:16:35’ → 1167   ‘2013-11-28 19:47:58’ → 2053

Note that one row can contain an arbitrary number of columns, hence the name wide row. And the pycassa code to insert such data (not structurally different from that of the first category):

users_posts.insert('admin', {'2013-11-19 09:26:04': 1046,
                             '2013-11-19 14:33:27': 1049,
                             '2013-11-20 11:16:35': 1167,
                             '2013-11-28 19:47:58': 2053})

Note the use of timestamps as column names (Column Comparators). This means the columns in the row will automatically be sorted by those timestamps (say, the times when the posts were published).

Now, in CQL3, the conceptual view of the wide row would have to be transposed. Let’s look at the transposed view in cqlengine/CQL3 first:

admin ‘2013-11-19 09:26:04’ 1046
admin ‘2013-11-19 14:33:27’ 1049
admin ‘2013-11-20 11:16:35’ 1167
admin ‘2013-11-28 19:47:58’ 2053

And the schema in cqlengine/CQL3:

class UsersToPosts(Model):
   user_name = columns.Text(primary_key = True, partition_key = True)
   posted_time = columns.DateTime(primary_key = True)
   post_id = columns.Integer()

And to populate data for such:

UsersToPosts.create(user_name = 'admin', posted_time = '2013-11-19 09:26:04', post_id = 1046)
UsersToPosts.create(user_name = 'admin', posted_time = '2013-11-19 14:33:27', post_id = 1049)
UsersToPosts.create(user_name = 'admin', posted_time = '2013-11-20 11:16:35', post_id = 1167)
UsersToPosts.create(user_name = 'admin', posted_time = '2013-11-28 19:47:58', post_id = 2053)

It is worth noting that under this wide-row scenario, a common use case is to query a range of columns, which requires the columns to be sorted as mentioned above. For example, we might want to look up the posts some user made in October 2013. In pycassa/Thrift such range filtering is performed transparently, as long as you get the types of the column names (Column Comparators) right (e.g. use DateTime for this purpose). In CQL3, the ordering has to be explicitly defined (which gives you better flexibility, IMO). The cqlengine model above already takes this into account, so the query can be done with the following code:

from datetime import datetime

q = UsersToPosts.objects(user_name = 'admin')
items = q.filter(posted_time__gt = datetime(2013, 10, 1, 0, 0, 0)) \
         .filter(posted_time__lt = datetime(2013, 10, 31, 23, 59, 59))

There is still a lot to talk about when primary keys, partition keys, clustering keys, secondary indexes are involved in schemas to achieve optimal performance and support specific kinds of queries, which will not be covered in this post.

Bonus: CQL3 provides “collection” column types (map, list, and set, much like C++’s STL or Python’s built-ins) that let you populate one column with an arbitrary number of elements (of course, the type of the elements, and of the keys for a map, still needs to be pre-defined). The downside is that collection columns cannot be indexed, as of yet, and you have to know your use case well enough not to misuse them (such as retrieving the whole collection only to read the content of one element).


What about “Key-Value” Performance with Schema-ful Tables?

The requirement to define a schema in CQL3 seems to offset the virtues of Cassandra; after all, being able to store arbitrarily structured data without pre-defining a schema is one of the main selling points of key-value databases. This perception partly comes from the fact that in an RDBMS, altering the schema of a table already populated with a hell of a lot of data is a time-consuming job, and failure to achieve atomicity there usually results in catastrophic data loss.

Reportedly, that’s not the case with Cassandra: with Cassandra/CQL3, altering the schema of a data-ful Column Family is said not to carry a significant performance penalty. Here’s my own experiment (on a virtual machine with 2GB virtual RAM & 2 virtual cores, hosted on a machine with a 5400rpm HDD, 8GB RAM & an i7-3615QM, running the latest Cassandra & MySQL versions):

  • Changing the schema of a CQL3-based Column Family with two columns (text + integer, with text being the Primary Key), already populated with 600K rows, to five columns (+ uuid, text, datetime): 0.368 seconds
  • Changing the schema of a MySQL InnoDB table with two columns (varchar(36) + integer with varchar being the Primary Key), already populated with 600K rows, to four columns (+ text, datetime): 2.2 seconds

Now, the difference would probably scale up with data size, and altering a table schema should not be as prohibitive in CQL3 as in an ordinary SQL RDBMS. In addition, I want to bring up the collection column types again: if you don’t need the contents of some dynamic columns to be indexed, aggregating them into a suitable collection type is advisable, since you then don’t need to change the Column Family’s schema to insert new elements.

It’s important to note that the internal storage structure is the same for Thrift and CQL3 (though I doubt this will continue to hold in the long run), and dynamic columns will always be there regardless of whether you use Thrift or CQL3, which are, in layman’s terms, just protocols. Do keep in mind, though, that Column Families with CQL3 features are not accessible through Thrift.


Final Thoughts (not so cqlengine/pycassa-related)

At times, I still ask, and am still asked, the question: can Cassandra/CQL3 replace MySQL, MSSQL, PostgreSQL, and so on? IMO, the most important thing to bear in mind is that Cassandra with CQL3, despite having a query interface much like SQL, is still not an RDBMS. So if you have specific scenarios that depend heavily on RDBMS-specific features rooted in relational algebra (especially when σ and ⋈ are frequently used together), and terms like data de-normalization and eventual consistency sound like a PITA breaking the sacred ACID formality, then Cassandra probably won’t do (not that I’m saying MySQL is a good idea; curious minds can check out VoltDB and the like).

Yet many use cases I have seen since my first encounter with database systems were not actually coupled tightly to RDBMS-specific features; moreover, many of them effectively used an RDBMS as key-value storage. For such use cases, giving Cassandra a try may be worthwhile thanks to its greater versatility. As for myself… I now can’t wait to deploy an experimental Cassandra server to our working environment to test its potential!

Read More

Testing for Multiples of 3 Without Division or Modulo, Final Version


This question probably comes up often in interviews at big companies, typically posed to candidates who use C-like languages and assembly. I think most people who have seen a fair share of interview questions, upon seeing “number” (an integer, no less) and “no division/modulo”, brace themselves for a bitwise-operations problem; the next step is to list some positive integers that are and are not multiples of 3, write them out in binary, and see what secrets those arrangements of 0s and 1s hold.


  1. Recall the usual tricks for “testing whether a number written in decimal is a multiple of N”. The trick we intend to use will be applied to numbers written in binary.
  2. Make the association: decimal ←→ testing for multiples of 11 vs. binary ←→ testing for multiples of 3. (Generalization: base N ←→ testing for multiples of (N+1).)
  3. Guess, then try to adapt “a decimal number is a multiple of 11 iff (sum of odd-position digits) – (sum of even-position digits) is a multiple of 11 (or 0)” into “a binary number is a multiple of 3 iff (sum of odd-position bits) – (sum of even-position bits) is a multiple of 3 (or 0)”.
  4. Test with a few numbers. You still have to write a few numbers out in binary to test, but anyone who has studied CS theory knows that for many problems, verifying a proof/hypothesis is far easier than producing one.


  1. You still need to be comfortable with bitwise operators here to avoid mistakes (well… if written really naïvely, only >> and & are needed anyway).
  2. Also note that the subtraction result may be negative; if that result is not correctly stored in a signed integer, or a bitwise operation is applied to it without any handling, things can end quite badly (should the interviewer throw in nasty conditions like “2’s complement is not guaranteed” or “you cannot assume whether the shift-right instruction uses sign-extension or zero-extension”, the latter becomes very important).
  3. As for the part where “the result of the odd/even-position subtraction must itself be tested again”, most people would probably write it with recursion, but it can also be written as an iterative loop. That said, recursion, unless written in some weird shape, can basically be tail-call optimized; a candidate who can point out this observation will be absolutely fine.
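The steps and caveats above can be sketched as follows (an iterative version, as item 3 suggests). Only >> and & touch the bits; Python’s signed, arbitrary-precision integers conveniently sidestep the 2’s-complement and sign-extension caveats:

```python
def is_multiple_of_3(n):
    # A binary number is a multiple of 3 iff (sum of even-position bits)
    # minus (sum of odd-position bits) is a multiple of 3, mirroring the
    # decimal rule for multiples of 11.
    if n < 0:
        n = -n
    while n > 3:
        even = odd = 0
        while n:
            even += n & 1      # bit at an even position
            n >>= 1
            odd += n & 1       # bit at an odd position
            n >>= 1
        n = even - odd         # the difference must be re-tested in turn
        if n < 0:
            n = -n             # keep it non-negative; Python ints are signed
    return n == 0 or n == 3
```

Once the iterated difference drops to 3 or below, comparing against 0 and 3 directly finishes the test without any division or modulo.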


Read More



  • Opened a themed photography portfolio site at albums.purincess.tw, built with Piwigo.
  • Started using AWS CloudFront as a CDN. albums.purincess.tw, blogs.purincess.tw, Mnjul’s Intimate Home, and resources of a few other sites all hang under it.
  • Also switched some commonly used JS files to be fetched from Cloudflare’s cdnjs.
  • The graphics on purincess.tw gained HiDPI support, and a rough mobile version is done. The code is still a mess, though.
  • The CV’s banner now supports HiDPI too, and gained responsiveness. Oh, not that I’m looking for a job.
  • Replaced some CSS on purincess.tw and mnjul.net with less, since less can run purely client-side and therefore costs less to deploy.
  • Fixed some bugs on the Wretch (無名) album site; the loading progress now displays correctly. Photo pages now also show the original Wretch link (though soon after finishing this, I started thinking Yahoo should just shut Wretch down…).
  • The Wretch album site and Mnjul’s Intimate Home now preload the XMLs and SWFs in parallel, so preloading takes much less time.


Read More

In the Making of Mnjul’s Intimate Home 4931 (VIII)

The previous post was either a Development Notes entry or just a regular type of article, huh.


  • Successfully turned the 华文仿宋 and 黑體 fonts, which I previously said could not be embedded, into webfonts. IE still can’t see them, but at least Chrome (Windows or OS X) can.
  • Also re-tuned some of Gabriola’s glyphs.
  • The preload screen now issues its requests in parallel. Most browsers cap the number of simultaneous XMLHttpRequest connections though, so it’s not that parallel; still somewhat faster.
  • Accordingly, the second-stage preload screen now shows all the items to be preloaded at once, with a slight animation as each finishes preloading. For space reasons, it now displays in two columns. Also fixed a bug where the second-stage preload screen would show a bit of the progress bar’s background color.
  • HiDPI (i.e. Retina Display) support! First the favicon was updated, and then every image used on the pages is provided at double size. The rain background also supports double resolution.
  • Did some miscellaneous JavaScript and jQuery optimizations.
  • Slightly worked around the situation on Chrome where the background music cuts out 10 minutes after the page loads. Now, when it cuts out, it resumes from where it just stopped (so there’s a gap of a second or two).
  • Put some things on AWS’s CloudFront CDN (really just webfonts and jQuery, the things that rarely change). (This conveniently works around the browsers’ simultaneous-XMLHttpRequest limit a bit, too.)

Read More