About Java Native Interface (JNI) Local References

Today I need to record some experience with Java JNI programming.

More accurately, it's about local references in JNI.

First of all, for basic knowledge about JNI references, see this link: http://www.ibm.com/support/knowledgecenter/SSYKE2_8.0.0/com.ibm.java.win.80.doc/diag/understanding/jni_refs.html

In this post, I'm going to talk about:
1) when a JNI function generates a local reference.
2) when the local reference is released automatically by the Java VM.
3) what issues might happen because of it.

1. When a JNI function generates a local reference

The basic rule is simple. A JNI function generates a local reference when:
1) the function does not return a global reference or weak global reference, and
2) the function returns a pointer of one of the following types:

jobject, jclass, jstring, jarray, jobjectArray, jbooleanArray, jbyteArray, jcharArray, jshortArray,
jintArray, jlongArray, jfloatArray, jdoubleArray, jthrowable, jweak.

Actually, this list is copied from jni.h; there is a section named /* Reference types */.

When both rules apply, the function returns a local reference.
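As a minimal sketch (the class and method names here are invented for illustration, and this needs a JVM to actually run), here is where local references typically appear and how to release them early:

```cpp
#include <jni.h>

// Hypothetical native method: every jclass/jstring returned by the
// JNIEnv calls below is a *local* reference owned by the current
// native frame.
JNIEXPORT void JNICALL
Java_com_example_A_method_1native(JNIEnv *env, jobject thiz) {
    // GetObjectClass() returns a jclass -> a new local reference.
    jclass cls = env->GetObjectClass(thiz);

    // NewStringUTF() returns a jstring -> another local reference.
    jstring s = env->NewStringUTF("hello");

    // ... use cls and s ...

    // Good practice: free them as soon as you are done, instead of
    // waiting for the VM to do it when the native method returns.
    env->DeleteLocalRef(s);
    env->DeleteLocalRef(cls);
}
```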

2. When the local reference is released automatically by the Java VM

The Java documentation says that local references are freed after the *native* call returns.

But what exactly is the native call? Let's give an example.

A.java has one native method:

native void method_native();

Jni.cpp has 3 functions:

JNIEXPORT void JNICALL Java_com_example_A_method_1native(JNIEnv *env, jobject thiz);
void cpp_function_1(JNIEnv *env, jobject thiz);
void cpp_function_2(JNIEnv *env, jobject thiz);

(Note that an underscore in a Java method name is mangled to _1 in the exported C symbol.)

In cpp_function_2() you call env->GetObjectClass(thiz); to do something.

The call relationship is:

method_native() -> cpp_function_1() -> cpp_function_2()

The local reference generated inside cpp_function_2() is not released when cpp_function_2() returns; it is actually released after method_native() returns to Java.

Alternatively, if you have called vm_->DetachCurrentThread(); (vm_ is of type JavaVM*), the local references are released at that point too.

3. What issues might happen because of it

So, since the Java VM releases the references anyway, why bother learning this?

Because in some cases you can leak local references, or exceed the maximum number of local references, and crash your program.

For example, if you call GetObjectClass() in a native worker thread, and that thread never returns control to the Java level, the reference is leaked.

Or, if a loop calls a function that generates a local reference, a new local reference is created on every iteration. Even though the JVM will eventually release them all after the native function returns, you have a good chance of exceeding the maximum local reference count in the meantime.
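A sketch of the loop case, with hypothetical helper names; the fix is either DeleteLocalRef() on every iteration, or wrapping the loop body in PushLocalFrame()/PopLocalFrame():

```cpp
#include <jni.h>

// Hypothetical helper: processes every element of a jobjectArray.
// Without the DeleteLocalRef() call, each GetObjectArrayElement()
// would pile up one more local reference until the table overflows.
void process_all(JNIEnv *env, jobjectArray arr) {
    jsize len = env->GetArrayLength(arr);
    for (jsize i = 0; i < len; ++i) {
        jobject item = env->GetObjectArrayElement(arr, i);  // new local ref
        // ... do something with item ...
        env->DeleteLocalRef(item);  // release it before the next iteration
    }
}

// Alternative: let the VM free everything created in a frame at once.
void process_all_with_frame(JNIEnv *env, jobjectArray arr) {
    jsize len = env->GetArrayLength(arr);
    for (jsize i = 0; i < len; ++i) {
        if (env->PushLocalFrame(16) != 0) return;  // out of memory
        jobject item = env->GetObjectArrayElement(arr, i);
        // ... do something with item ...
        env->PopLocalFrame(nullptr);  // frees item and any other refs
    }
}
```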

So, in summary, the good practice is to release local references with DeleteLocalRef() as soon as possible.

Writing Async Test Cases with gtest & gmock for C++

Recently, I had a project that needed some async IO functions, using libuv as the backend, and moving the original logic to an async style.

The project was OK, but after a good experience writing node.js & Ruby on Rails code, I cannot live without test cases for such complex code.

Thankfully, dear stackoverflow.com gave me the answer: gtest + gmock.

Let's first get a taste of the code.

See the code in this link: gist.

(For security reasons, I have removed some of the logic, so it cannot compile.)

class AsyncIOHttpCallback {}: this is the callback class. You should define your callback interface as a virtual class, so the policy can be implemented by the caller.

class MockCallback publicly inherits from AsyncIOHttpCallback, and implements its member functions with:

MOCK_METHOD1(onError, void(CURLcode errCode));
MOCK_METHOD0(onTimeout, void());
MOCK_METHOD2(onData, void(char *data, size_t datalen));
MOCK_METHOD4(onFinish, void(double content_length, long header_size, long httpcode, char *redirect_url));

The MOCK_METHODn macros come from Google Mock; n matches the number of parameters, and the second argument is the function signature.

OK, the mock definition part is finished.

Let's see this code.

Create the mock object:

shared_ptr<MockCallback>    mcallback(new StrictMock<MockCallback>);


policy->finishHandler = [&mcallback](double content_length, long header_size, long httpcode, char *redirect_url) {
    EXPECT_EQ(httpcode, 200);
    mcallback->onFinish(content_length, header_size, httpcode, redirect_url);
};

The finishHandler is a std::function<> which can hold a functor or a lambda; inside this lambda, it calls the mock object. (You can define a simpler interface.)

In the end, we can set expectations on the mock object:

    EXPECT_CALL((*mcallback.get()), onFinish(_, _, _, NULL)).Times(1);

This means we expect this callback to be called exactly once, with certain parameters. "_" is a special matcher meaning "anything, we don't care"; all other values will be checked by gmock. If they don't match, a warning message is printed and the failure is reported.

If the callback is never called, gmock catches that error; if some uninteresting function is called, there will be a warning message.

Really useful tool!
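To tie the pieces together, here is a compact, self-contained sketch of the same pattern. The names (Callback, run_async, the single-parameter onFinish) are invented for illustration, and it assumes gtest/gmock are installed and linked:

```cpp
#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <functional>

using ::testing::_;
using ::testing::StrictMock;

// The callback interface the code under test reports into.
class Callback {
public:
    virtual ~Callback() {}
    virtual void onFinish(long httpcode) = 0;
};

class MockCallback : public Callback {
public:
    MOCK_METHOD1(onFinish, void(long httpcode));
};

// Stand-in for the async code under test: it simply invokes the
// handler, the way libuv would from its event loop.
void run_async(std::function<void(long)> finishHandler) {
    finishHandler(200);
}

TEST(AsyncIO, CallsOnFinishOnce) {
    StrictMock<MockCallback> mcallback;
    // Expect exactly one onFinish(200); anything else fails the test.
    EXPECT_CALL(mcallback, onFinish(200)).Times(1);

    run_async([&mcallback](long httpcode) {
        mcallback.onFinish(httpcode);
    });
}

int main(int argc, char **argv) {
    ::testing::InitGoogleMock(&argc, argv);
    return RUN_ALL_TESTS();
}
```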

There are also some other benefits:

  1. gmock can check for memory leaks of your mock callback object. If it is leaked after the test terminates, it prints an error message like:

test_aio.cpp:198: ERROR: this mock object should be deleted but never is. Its address is @0x7fe1d0c0ce50.


If you want to know more about gmock, check Google's gtest GitHub: link



Understanding Function Calls and Recursion

I have read a series of posts about the stack frame, something fundamental to native development.

  1. Journey to the Stack, Part I
  2. Epilogues, Canaries, and Buffer Overflows
  3. Recursion: Dream Within a Dream
  4. Tail Calls, Optimization, and ES6
  5. Closures, Objects, and the Fauna of the Heap

These posts are written by Gustavo Duarte, all illustrated with figures and funny examples; they are worth some time to read.

Art of padding

Sharing some links which I found very useful for understanding the padding in kernel code; you also see these padding tricks in high-performance applications like 3D games.

Eric S. Raymond has a great article about C struct padding and how it affects the actual memory size of your structures:

And there is an article in Intel's performance guide about how padding fixes the "false sharing" issue in an SMP programming model.


The link below belongs to the Intel Guide for Developing Multithreaded Applications, which is also valuable to read.

This link is from Rogue Wave's manual, which also has great detail about how memory is actually laid out with padding: link


Recommending a handy UML tool – PlantUML

Last year I changed jobs and moved to a new city; a lot of things changed. Looking back now, not everything is perfect, but things are back on track.

The new job requires drawing UML diagrams, mainly for presentations and documentation.

Like many people, I started by downloading drawing tools and dragging boxes around one by one. I don't know about you, but after finishing one diagram that way, I never wanted to draw a second.

Thinking about it, this way of drawing actually runs against a programmer's way of thinking; programmers are much more used to generating such diagrams from code.


1. PlantUML:
This tool is written in Java. It can turn text like the following into a sequence diagram:

Bob->Alice : hello


This is just the simplest example; it also supports Emacs integration and many other plugins.

URL: http://plantuml.sourceforge.net/index.html


My Emacs Theme Setup

TL;DR: My Emacs theme setup has two main features: one is cycling through themes with a hotkey, the other is picking a default theme based on the time of day the editor is opened (daytime or night).

Today let's look at the theme setup in my Emacs. I have spent a lot of time switching themes back and forth; different themes feel easier on the eyes in different situations, so the theme setup in my dot emacs is fairly complex. Here it is; the Emacs version is 24.

Besides the built-in themes, I also use some themes ported from Vim and TextMate, easily installed with package-install:

  (package-install 'zenburn-theme)
  (package-install 'twilight-theme)
  (package-install 'twilight-bright-theme)
  (package-install 'solarized-theme)
  (package-install 'monokai-theme)


This advice overrides the load-theme command so that it first disables the previous theme and then applies the new one; otherwise the two themes would blend together.

;; Auto disable theme setup before...
(defadvice load-theme
  (before theme-dont-propagate activate)
  (mapcar #'disable-theme custom-enabled-themes))


(defun reset-theme-list ()
  (setq all-themes '(twilight twilight-bright adwaita zenburn solarized-dark solarized-light monokai))
  (setq valid-themes all-themes))

This function cycles through the theme list one by one, using the current-theme variable to track which theme is active.

(defun looping-select-theme ()
  (interactive)
  (if valid-themes
      (progn
        (setq current-theme (car valid-themes))
        (setq valid-themes (cdr valid-themes))
        (load-theme current-theme t)
        (message "Current Theme is: %s" current-theme))
    ;; list exhausted: disable the last theme and start over.
    (disable-theme current-theme)
    (reset-theme-list)))

The next function is the theme initialization function; it mainly adds a default theme based on the current time. It is controlled by the (and (< hour 16) (> hour 8)) test: between 8 AM and 4 PM use the daytime theme, adwaita; at night use twilight. If you don't like a different background at different times, just remove the let expression below.

(defun color-init ()
  (when (not (boundp 'current-theme))
    (reset-theme-list)

    ;; The let form below sets up the default theme based on the current
    ;; time: in daylight, use a white background, else a dark background.
    (let ((hour (caddr (decode-time (current-time))))
          (day-theme 'adwaita)
          (night-theme 'twilight))
      ;; Between 8 AM and 4 PM a white background has less reflection.
      (if (and (< hour 16) (> hour 8))
          ;; Add the day theme to the front of the theme list, making it the default.
          (when (not (eq (car all-themes) day-theme))
            (setq valid-themes (cons day-theme valid-themes)))
        ;; Same as above, for the night theme.
        (when (not (eq (car all-themes) night-theme))
          (setq valid-themes (cons night-theme valid-themes)))))

    ;; The first call picks up the theme now at the head of the list.
    (looping-select-theme)))

Add these two lines to your dot emacs:

(global-set-key [M-f9] 'looping-select-theme)
(color-init)

That's it. Now you can press Alt-F9 to cycle through the themes, and when Emacs starts it will pick a different default theme depending on the time of day.

Profiling a Ruby on Rails App

I think everyone enjoys Ruby programming; metaprogramming is a really efficient way to work. But sometimes we find that a function, or the whole app, has poor performance.

Guess what, my first thought was "Ruby is just too slow". But my second thought was: how can I figure out where it is slow, rather than switching to some other language?

So let's do some profiling to figure it out.

Add this to your Gemfile, or gem install it:

gem 'ruby-prof', :git => 'git://github.com/ruby-prof/ruby-prof.git', :group => [:development, :test]

Add this method to your ApplicationController.rb:

def profile(prefix = "profile")
  result = RubyProf.profile { yield }

  dir = File.join(Rails.root, "tmp", "performance", params[:controller].parameterize)
  FileUtils.mkdir_p(dir)  # make sure the output folder exists
  file = File.join(dir, "callgrind.%s.%s.%s" % [prefix.parameterize, params[:action].parameterize, Time.now.to_s.parameterize])
  open(file, "w") { |f| RubyProf::CallTreePrinter.new(result).print(f, :min_percent => 1) }
end
helper_method :profile

You can put whatever you want to profile inside this block, and it will generate a callgrind file under your tmp/performance/[controller-name] folder.

If you're on a Mac, you can install this tool to view the profile:

brew install qcachegrind

Other platforms have equivalent tools, like kcachegrind on Linux.

You will get a graphical call-tree view of the profiling result.


Hope this helps you figure out which part is causing the performance issue.

And my other little trick: like the profile helper, I add a simple timer helper:

def timer(tag = "default")
  t1 = Time.now
  yield
  t2 = Time.now
  msecs = (t2 - t1) * 1000.0
  logger.info "Time in profile #{tag} #{msecs.to_i} ms"
end
helper_method :timer

Together with code blocks, you can get a per-section timing profile like below:

timer("total") do

  timer("func1") do
    # ...
  end

  timer("loop") do
    some_array.each { function }
  end
end

It may seem naive, but this simple method can be more helpful than other complex tools.

Rubber Deployment Notes on EC2 for a Rails App

I needed to deploy my Rails app on EC2 recently. I had actually used Heroku for a while; Heroku is excellent for development, but the price does not compare well to EC2, and it was also a little slow to access from China.

So I decided to deploy to EC2. Unlike Heroku, setting up EC2 requires a lot of configuration, which is a headache for everyone. But I found Rubber, an excellent tool for setting up an environment on EC2, based on Capistrano.

Starting from these link1, link2, I could get going with Rubber.

In my first try, I put the db, app, and web servers all on a single EC2 instance, but there were a lot of minor issues; recording them here:

1. In rubber.yml, make sure the private net mask is set correctly. In some AWS Regions the default value differs; make sure your region's mask is added to the list, otherwise your app instance may not be able to reach your db instance.

2. When building Redis, the build failed with an error like "jemalloc.a not found". The problem is that the Redis Makefile needs its bundled deps built first, so add the following line into deploy-redis.rb (between the two lines marked with >):

>           tar -zxf redis-#{rubber_env.redis_server_version}.tar.gz
          # Build the binaries.
          cd redis-#{rubber_env.redis_server_version}
          cd deps; make hiredis jemalloc linenoise lua; cd ..  # to fix the build error.
>           make

3. PostgreSQL: because my database requires uuid-ossp, I added the following task wrapper in config/deploy.rb:

namespace :rubber do

  namespace :project do

    before "deploy:migrate", "rubber:project:add_pg_superuser_and_enable_hstore"
    after  "deploy:migrate", "rubber:project:remove_pg_superuser"

    task :add_pg_superuser_and_enable_hstore,
         :roles => [:postgresql_master, :postgresql_slave] do
      alter_user_cmd = "ALTER USER #{rubber_env.db_user} SUPERUSER;"
      create_ext_cmd = 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";'
      rubber.sudo_script "add_superuser_create_hstore", <<-ENDSCRIPT
        sudo -i -u postgres psql -c "#{alter_user_cmd}"
        sudo -i -u postgres psql -c '#{create_ext_cmd}'
      ENDSCRIPT
    end

    task :remove_pg_superuser, :roles => [:postgresql_master,
                                          :postgresql_slave] do
      alter_user_cmd = "ALTER USER #{rubber_env.db_user} NOSUPERUSER;"
      rubber.sudo_script "remove_pg_superuser", <<-ENDSCRIPT
        sudo -i -u postgres psql -c "#{alter_user_cmd}"
      ENDSCRIPT
    end

  end
end

You also need this package change in rubber-postgresql.yml (line 36):

packages: [postgresql-client, libpq-dev, postgresql-contrib]

4. About figaro

If you save your secrets in ENV vars like me, you may want to use figaro.
But when you deploy code through GitHub, application.yml is not checked in to the repo, so you need to add a task in your config/deploy.rb:

# This task makes sure application.yml is copied after the git checkout.
namespace :figaro do
  desc "SCP transfer figaro configuration to the shared folder"
  task :setup do
    transfer :up, "config/application.yml", "#{shared_path}/application.yml", via: :scp
  end

  desc "Symlink application.yml to the release path"
  task :symlink do
    run "ln -sf #{shared_path}/application.yml #{latest_release}/config/application.yml"
  end

  desc "Check if figaro configuration file exists on the server"
  task :check do
    begin
      run "test -f #{shared_path}/application.yml"
    rescue Capistrano::CommandError
      unless fetch(:force, false)
        logger.important 'application.yml file does not exist on the server "shared/application.yml"'
      end
    end
  end
end

after "deploy:setup",           "figaro:setup"
after "deploy:finalize_update", "figaro:symlink"

5. GitHub ssh.

Because you are probably deploying from a private GitHub account, you need to use ssh auth, but you don't want to put the private key on every EC2 instance.

So consider using ssh agent forwarding for this job; it takes 3 steps:
a) add the following to your ~/.ssh/config:

Host *.compute.amazonaws.com
  ForwardAgent yes

b) use this command to generate a new key for the deployment.

ssh-keygen -f ~/.ec2/github_key

c) add these settings in your config/deploy.rb:

ssh_options[:forward_agent] = true
ssh_options[:port] = 22
ssh_options[:keys] = [File.join(ENV["HOME"], ".ec2", "github_key")]

Don't forget to change the scm part:

# Use a simple directory tree copy here to make demo easier.
# You probably want to use your own repository for a real app
set :scm, :git
set :repository, "git@github.com:xxxx/yyyy.git"
set :deploy_via, :remote_cache

After all that, you can deploy your new code through GitHub.




Building a Fast Emacs on Mac OS with Homebrew

I first touched Emacs back in 2005, on yangxi's recommendation. Since then, Emacs has been my main tool. My programming environment changed a lot, from Linux to Windows to Mac, and from writing C to C++, Java, Objective-C, Python, and Ruby.

I later tried many editors and IDEs, but none ever felt as comfortable as Emacs. Even after I developed "Emacs pinky", I never switched.

Speaking of Emacs, I have to talk about how to press the Control key. At first I used my little finger, then gradually my palm; after buying an HHKB I remapped Caps Lock to Control. Only when the pain in my little finger made normal work impossible did I realize the Caps remap was not for me, so I went back to the old palm method. The finger doesn't hurt much now.
Of course, a foot pedal is probably the ultimate solution. :)

Enough rambling; back to the topic.
Emacs on Linux is extremely fast, and the latest Emacs 24 is a joy to use.
But once my main programming environment became Mac OS, installing a good Emacs package became a complicated problem.

The stock build worked fine at first, but since Mac OS 10.9 it has a serious memory leak that drives the CPU to 100%, and after the laptop wakes from sleep the beachball spins forever and only a hard power-off helps. After several forced shutdowns, I had to switch.

Aquamacs: Emacs for Mac OS X:
But this Emacs is not fast; the lag is clearly noticeable. Its compatibility with some extension packages is also not great, and many convenient packages don't work.

So one evening, when it really felt like time to sharpen the axe, I decided to try a new approach.
Following the instructions on this wiki, I installed via Homebrew. If you like the newest version as I do, you can install it with the following brew command:

brew install emacs --cocoa --HEAD --use-git-head --srgb

Because Homebrew downloads the source code and compiles it on your own machine, I was pleasantly surprised to find the resulting Emacs is remarkably fast, comparable to Emacs on Linux.

My guess at the reason: the machine has the latest Xcode installed, and some base libraries are compiled with Clang, so maybe the runtime compatibility is better?

The real cause is hard to track down, but as a piece of experience it still feels very useful. An editor's speed affects more than typing; a fast editor also minimizes interruptions to your train of thought while writing code.