4.06.2009

Android HAL

Someone asked whether there is a standard way to add a driver for upper-layer Android applications to use. The answer is no.



If I want to create a new device conforming to the Android HAL, as I understand it, I should use one of the following approaches:
1. App - Runtime Service - lib
2. App - Runtime Service - Native Service - lib
3. App - Runtime Service - Native Daemon - lib


But apart from the existing devices, it is not easy to identify all the files that need to be modified to add a new one.
I would also like to write a user-mode driver, instead of using the kernel-mode driver model.
For now, I want to make it work in the emulator.


So I added a fake device, similar to the "sensor" device. I tried to create a library (called "fake") that manages all the necessary features. I put it into /hardware/libhardware/fake and /hardware/libhardware/include/fake, with its Android.mk. It relies on /dev/fake and /dev/input/fake (the first for data, the second for control).
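For reference, a minimal Android.mk for such a library might look like the sketch below. The module name, source file, and dependencies are guesses for illustration, not taken from any actual tree.

```makefile
# Hypothetical Android.mk for the "fake" HAL library
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)

LOCAL_MODULE := libfake
LOCAL_SRC_FILES := fake.c
LOCAL_C_INCLUDES += hardware/libhardware/include
LOCAL_SHARED_LIBRARIES := liblog

include $(BUILD_SHARED_LIBRARY)
```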


First question: is there a standard way in the Android project for adding a new device (kernel and/or user mode)? (Which folders should the files be placed in, how should the makefiles be set up, etc.? Which driver manages the device /dev/input/compass?) Is there a stub for implementing a new device and its related Manager/Services, up to the Application level?




The abstraction layers in Android are admittedly inconsistent. Audio and camera both use a C++ pure virtual interface (e.g. AudioHardwareInterface.h), while other places, such as the LEDs and the blitter functions, use a C struct of function pointers.
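The two styles mentioned above can be sketched in a few lines. All of the names here (FakeAudioInterface, led_device_t) are illustrative stand-ins, not the real Android headers.

```cpp
#include <cassert>

// Style 1: C++ pure virtual interface (the AudioHardwareInterface approach).
// A vendor implements the interface; the framework only sees the base class.
class FakeAudioInterface {          // hypothetical name
public:
    virtual ~FakeAudioInterface() {}
    virtual int setVolume(float v) = 0;   // 0 = success, by convention here
};

class StubAudio : public FakeAudioInterface {
public:
    int setVolume(float) override { return 0; }
};

// Style 2: C struct of function pointers (the LED/blitter approach).
// The library exports a filled-in struct instead of a C++ object.
struct led_device_t {               // hypothetical name
    int (*set_light)(struct led_device_t* dev, int color);
};

static int stub_set_light(led_device_t*, int) { return 0; }

led_device_t make_stub_led() {
    led_device_t dev;
    dev.set_light = stub_set_light;
    return dev;
}
```

Either way the caller is insulated from the concrete hardware; the difference is only whether the dispatch table is a C++ vtable or a hand-built struct.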

Because we run the actual device image in the emulator, any devices that aren't backed by a kernel driver in the emulator need a fake device (such as the camera). For audio, we have a kernel driver for the emulator, but other devices like the G1 have a user-space layer on top of the kernel driver. In this case, for the emulator, there is a shim that translates AudioHardwareInterface calls into kernel calls.


How you surface the driver features to the application depends on what you are trying to do. Most hardware features are abstracted by a native service, for example SurfaceFlinger for 2D graphics, AudioFlinger for audio, and CameraService (because CameraFlinger just sounded wrong) for the camera. This allows us to enforce security using the binder interface and to abstract away a lot of differences, so that applications don't have to be written to work with specific hardware.


I haven't looked at the compass code, but I can walk you through the camera as an example.


At the top of the stack is android.hardware.Camera, which is the Java object that the application instantiates when it wants to take a picture. This is a thin wrapper around android_hardware_Camera.cpp, which is the JNI interface. This in turn is a wrapper around libs/ui/Camera.cpp, which is the proxy for the remote object in the camera service. And yes, libs/ui is a strange place for it, but it has some circular dependencies on SurfaceFlinger, so that's where it ended up.


Now it gets interesting, because there is a binder interface called ICamera.h (pure virtual) which is implemented in ICamera.cpp, the marshalling code for the IPC binder calls. Let's just take it on faith that the calls from the client result in a marshalled ICameraClient object appearing on the server side of the interface. Upon establishing a connection, provided that the application has permission to use the camera, the camera service instantiates a CameraHardwareInterface-derived object, which is the HAL for the actual camera hardware. The camera service takes care of the preview display and other low-level housekeeping functions that would be difficult to do in a Java app.
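The service-side step of this flow can be sketched as below, with the binder marshalling elided so everything runs in one process. Every name here (CameraHardwareStub, CameraServiceSketch) is illustrative, not the real AOSP code.

```cpp
#include <memory>

// Stands in for CameraHardwareInterface: the pure virtual HAL boundary
// that the camera service programs against.
class CameraHardwareStub {
public:
    virtual ~CameraHardwareStub() {}
    virtual bool startPreview() = 0;
};

// A fake hardware implementation, like the one the emulator needs for
// devices not backed by a kernel driver.
class FakeCameraHardware : public CameraHardwareStub {
public:
    bool startPreview() override { return previewing_ = true; }
    bool previewing_ = false;
};

class CameraServiceSketch {
public:
    // connect(): where the real service checks the caller's permission,
    // then instantiates a CameraHardwareInterface-derived object.
    std::unique_ptr<CameraHardwareStub> connect(bool hasPermission) {
        if (!hasPermission) return nullptr;
        return std::unique_ptr<CameraHardwareStub>(new FakeCameraHardware());
    }
};
```

The point of the pattern is that the service, not the app, owns the HAL object, which is what lets it enforce permissions and hide hardware differences.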


The camera hardware driver can either be implemented as a kernel driver with a thin user-space shim driver, or as a user-space driver with a thin kernel driver (primarily to control access).
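The "thin user-space shim" pattern boils down to translating each interface call into an operation on a device file descriptor. A hedged sketch, assuming a made-up text protocol: a real shim would open a /dev node and likely use ioctl(), but here a plain fd and write() stand in so the sketch stays self-contained.

```cpp
#include <unistd.h>
#include <string>

// Hypothetical shim: one HAL-style call in, one kernel call out.
class VolumeShim {
public:
    explicit VolumeShim(int dev_fd) : fd_(dev_fd) {}

    // Encode the request and push it to the "kernel" side.
    ssize_t setVolume(int percent) {
        std::string cmd = "VOL " + std::to_string(percent) + "\n";
        return write(fd_, cmd.data(), cmd.size());   // returns bytes written
    }

private:
    int fd_;   // stands in for an open /dev/... descriptor
};
```

The shim holds no state beyond the descriptor, which is what makes it "thin": all the real logic stays on one side of the boundary.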


This is one of the more complex device models. Other devices (like the LEDs) have a much simpler model.



> If I want to create a new device conforming to the Android HAL, I should use one of the following approaches:
> 1. App - Runtime Service - lib
> 2. App - Runtime Service - Native Service - lib
> 3. App - Runtime Service - Native Daemon - lib

It depends; most of the time you'll need to do something like:

app -> java api (manager) -> java service -> HAL module -> kernel


Sometimes (as is the case for the sensors) there is an extra native daemon, but in this particular case it's an implementation detail (the sources for that daemon are proprietary).


Sometimes the "java service" can be replaced directly by a "native service", thanks to the binder interfaces (they are agnostic to the languages involved, so a service can even be rewritten from Java to C++, or vice versa, without breaking compatibility).


In some cases it is also possible to skip the "java service" entirely, if the HAL module can handle multiple clients and all the permissions involved.


It is also possible to mix and match; this is the case for the sensors: the service is used to establish a connection with the hardware, but all the data moving is done by the app talking directly to the HAL module.
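The sensors-style split can be sketched with a pipe standing in for the HAL data channel: the "service" role only brokers the connection, and the "app" role then reads samples directly, with no service round-trip per sample. Names and the channel mechanism are illustrative assumptions.

```cpp
#include <unistd.h>

struct SensorChannel { int read_fd; int write_fd; };

// "Service" role: set up the connection once, hand the data end to the app.
bool service_connect(SensorChannel* ch) {
    int fds[2];
    if (pipe(fds) != 0) return false;
    ch->read_fd  = fds[0];   // given to the app
    ch->write_fd = fds[1];   // kept by the hardware/HAL side
    return true;
}

// "App" role: read sensor samples directly; the service is out of the loop.
ssize_t app_read_sample(const SensorChannel& ch, char* buf, size_t n) {
    return read(ch.read_fd, buf, n);
}
```

Keeping the per-sample path out of the service avoids an IPC hop on every reading, which matters for high-rate sources like sensors.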


So, in short, there are no rules. It depends on what you're trying to accomplish.

2 comments:

Alleen's Computer Science Notebook said...

Hello!
May I ask where this article came from?
Thanks!

Arik said...

Searching for "Android HAL" in my Android search turns it up; it was a question someone asked on the mailing list.