How Graphic Buffers Are Passed Between Processes
Purpose of this article: to explain how graphic buffers are passed between processes.
I recently dug into how graphic buffers travel between processes, and after googling every blog post I could find, nobody explains it clearly.
Graphic buffers sit at the heart of Android rendering, yet for the post-8.0 architecture no one has laid the whole path out.
source.android.com/docs/core/a… has some clues around the buffer handle, but no details.
Across processes, Android never copies the graphic buffer itself; the processes operate on the buffer through a shared-memory mechanism.
But nobody clearly explains how the fd behind that shared memory gets passed around.
An fd is a process-local int: normally, an fd from one process means nothing in another. So how does the GraphicBuffer object manage to carry a shared-memory fd across? At the Java layer, the Parcel class has a writeFileDescriptor function for passing fds, but how do the native layer and the HAL layer do it?
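To see why the fd is the only thing that needs to cross the process boundary, here is a minimal standalone sketch (not Android source; it assumes fd is a shared-memory fd, e.g. ashmem or dma-buf, that has already been delivered into this process): every process that maps the same fd sees the same physical pages.

#include <cstdio>
#include <cstring>
#include <sys/mman.h>

// 'fd' is assumed to be a shared-memory fd received from another process;
// 'size' is the buffer size, assumed known.
void* map_shared_buffer(int fd, size_t size) {
    void* addr = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) {
        perror("mmap");
        return nullptr;
    }
    // Every process that mmap()s the same fd gets a mapping onto the same
    // physical pages, so writes here are immediately visible to the peers.
    memcpy(addr, "hello", 6);
    return addr;
}

So the whole problem reduces to: how does an integer that is only meaningful inside the sender's fd table become a valid fd in the receiver? That is exactly what the binder driver's BINDER_TYPE_FD / BINDER_TYPE_FDA handling does, as we will see below.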
This article is not meant to be exhaustive; it covers only the core path. It assumes some fairly solid background:
- binder
- userspace: how the Parcel class builds a serialized object during a binder transaction, and how it is unpacked on the other side
- kernel: how the binder driver parses the different object types in a transaction
- memory: shared memory, mmap, and what allocation and mapping really are
- without a reasonably deep grasp of memory, some parts are hard to follow from code alone; this is probably why most blog posts fail to explain them
- Linux drivers
- the interaction between Surface and SurfaceFlinger
Outline
- Part One: a walkthrough of the Surface::dequeueBuffer code flow
- Part Two: passing graphic buffers between processes, in detail
  - [1] How the SurfaceFlinger process and the IAllocator service pass the display buffer
  - [2] How the App process and the SurfaceFlinger process pass the GraphicBuffer object (server-side requestBuffer flow)
  - [3] Linux kernel: how the binder driver handles the BINDER_TYPE_FDA and BINDER_TYPE_FD types
- Summary
Part One: a walkthrough of the Surface::dequeueBuffer code flow
The core of graphic-memory allocation is the Surface::dequeueBuffer flow.
- Surface::dequeueBuffer calls BufferQueueProducer::dequeueBuffer, which, on the SurfaceFlinger side, picks a free slot in the BufferSlot array and returns its index
- if that BufferSlot has no GraphicBuffer yet, a new one is created; its constructor allocates the graphic memory and maps it into the current (SurfaceFlinger) process
- in that case the return value of BufferQueueProducer::dequeueBuffer also carries the BUFFER_NEEDS_REALLOCATION flag
- when the return value of BufferQueueProducer::dequeueBuffer carries BUFFER_NEEDS_REALLOCATION, the client calls BufferQueueProducer::requestBuffer to fetch the GraphicBuffer, which also maps the graphic memory into the current (App) process
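As an anchor for the call tree below, here is an abridged sketch of the client side, based on frameworks/native/libs/gui/Surface.cpp (fence plumbing and error handling omitted; not the verbatim source):

// Abridged sketch of Surface::dequeueBuffer (App process side).
int Surface::dequeueBuffer(ANativeWindowBuffer** buffer, int* fenceFd) {
    int buf = -1;
    sp<Fence> fence;
    // Crosses into SurfaceFlinger: only a slot index and flags come back.
    status_t result = mGraphicBufferProducer->dequeueBuffer(
            &buf, &fence, mReqWidth, mReqHeight, mReqFormat, mReqUsage,
            &mBufferAge, nullptr);
    sp<GraphicBuffer>& gbuf(mSlots[buf].buffer);
    if (result & IGraphicBufferProducer::BUFFER_NEEDS_REALLOCATION) {
        // Second binder call: this is where the GraphicBuffer (and the fds
        // inside its native_handle_t) actually reach this process.
        result = mGraphicBufferProducer->requestBuffer(buf, &gbuf);
    }
    *buffer = gbuf.get();
    return OK;
}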
Call flow
- Surface::dequeueBuffer 【App process】
  - BpGraphicBufferProducer::dequeueBuffer 【interface layer】
    - BufferQueueProducer::dequeueBuffer 【SurfaceFlinger process】
      - dequeueBuffer returns a BufferSlot-array index through the outSlot parameter and flags through its return value, but it does not return the GraphicBuffer itself
      - inside dequeueBuffer, if the obtained BufferSlot has no GraphicBuffer, a new GraphicBuffer is created and BUFFER_NEEDS_REALLOCATION is set in the return flags
        new GraphicBuffer( width, height, format, BQ_LAYER_COUNT, usage, {mConsumerName.string(), mConsumerName.size()});
      - the GraphicBuffer constructor calls initWithSize, which runs the allocation code
        - initWithSize(inWidth, inHeight, inFormat, inLayerCount, inUsage, std::move(requestorName));
        - GraphicBufferAllocator::allocate
          - allocateHelper(width, height, format, layerCount, usage, handle, stride, requestorName, true)
          - Gralloc4Allocator::allocate
            - hwbinder service call:
              - IAllocator::getService()->allocate(descriptor, bufferCount, [&](const auto& tmpError, const auto& tmpStride, const auto& tmpBuffers){...});
              - the code past this point is vendor-specific; in the end it reaches some kernel driver that allocates the memory, e.g. the ion driver (a minimal ion sketch follows after this call tree)
              - for how SurfaceFlinger and the IAllocator service pass the shared memory, see Part Two, section [1]
            - in the callback, IMapper::importBuffer(tmpBuffers[i], &outBufferHandles[i]) is called // internally uses mmap to map the memory into the current process
- When the return flags of dequeueBuffer contain BUFFER_NEEDS_REALLOCATION:
  - the App must call requestBuffer to obtain the GraphicBuffer object,
  - which also maps the graphic memory SurfaceFlinger allocated into the App process
  - BpGraphicBufferProducer->requestBuffer(buf, &gbuf); 【interface layer】
    - BufferQueueProducer::requestBuffer: returns the GraphicBuffer object
      - the SurfaceFlinger-side requestBuffer code is trivial; it merely assigns the object allocated during dequeueBuffer to the gbuf parameter and hands it back to the App
      - so how does the graphic buffer's fd get to the App, and how does the App map the memory?
      - the key is how the GraphicBuffer object is built in BpGraphicBufferProducer::requestBuffer:
        - status_t result = remote()->transact(REQUEST_BUFFER, data, &reply);
          - for the GraphicBuffer transfer itself, see Part Two, section [2]
        - *buf = new GraphicBuffer();
        - result = reply.read(**buf);
          - read calls GraphicBuffer::unflatten
            - GraphicBuffer::unflatten calls GraphicBufferMapper::importBuffer
              - which again calls IMapper::importBuffer and finally mmaps the memory into the current process
              - for the mmap path, see hardware/google/gchips/GrallocHAL/src/hidl_common/Mapper.cpp
              - cs.android.com/android/pla…
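Wherever the vendor implementation bottoms out, the pattern is the same: a kernel driver hands back a buffer addressable through an fd. As a minimal, hypothetical illustration (not any particular vendor's code; the heap_id_mask value is platform-specific, and the struct below mirrors the 4.14-era staging ion UAPI from my reading of include/uapi/linux/ion.h):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdint>

// Mirrors the 4.14 staging ion UAPI (assumption: layout as in that era).
struct ion_allocation_data {
    uint64_t len;
    uint32_t heap_id_mask;
    uint32_t flags;
    uint32_t fd;       // OUT: the dma-buf fd backing the buffer
    uint32_t unused;
};
#define ION_IOC_ALLOC _IOWR('I', 0, struct ion_allocation_data)

int alloc_graphic_buffer_fd(uint64_t len, uint32_t heap_id_mask) {
    int ion_dev = open("/dev/ion", O_RDWR);
    if (ion_dev < 0) return -1;
    struct ion_allocation_data data = {
        .len = len, .heap_id_mask = heap_id_mask, .flags = 0,
    };
    int ret = ioctl(ion_dev, ION_IOC_ALLOC, &data);
    close(ion_dev);  // the returned dma-buf fd stays valid on its own
    return ret < 0 ? -1 : (int)data.fd;
}

The fd that comes back is a dma-buf fd. It is exactly this fd, wrapped in a native_handle_t inside a hidl_handle, that now has to travel over hwbinder back to SurfaceFlinger.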
Part Two: passing graphic buffers between processes, in detail
【1】How the SurfaceFlinger process and the IAllocator service pass the display buffer
The full name of the IAllocator service is android.hardware.graphics.allocator@4.0::IAllocator/default.
On Qualcomm platforms its process is named vendor.qti.hardware.display.allocator-service.
SurfaceFlinger side: the allocate function of the IAllocator interface
// frameworks/native/libs/ui/Gralloc4.cpp
status_t Gralloc4Allocator::allocate(std::string requestorName, uint32_t width, uint32_t height,
android::PixelFormat format, uint32_t layerCount,
uint64_t usage, uint32_t bufferCount, uint32_t* outStride,
buffer_handle_t* outBufferHandles, bool importBuffers) const {
//...
//=================== key code ============
auto ret = mAllocator->allocate(descriptor, bufferCount,
[&](const auto& tmpError, const auto& tmpStride,
const auto& tmpBuffers) { // each element of tmpBuffers is a hidl_handle
error = static_cast<status_t>(tmpError);
if (tmpError != Error::NONE) {
return;
}
if (importBuffers) {
for (uint32_t i = 0; i < bufferCount; i++) {
error = mMapper.importBuffer(tmpBuffers[i],
&outBufferHandles[i]);
if (error != NO_ERROR) {
for (uint32_t j = 0; j < i; j++) {
mMapper.freeBuffer(outBufferHandles[j]);
outBufferHandles[j] = nullptr;
}
return;
}
}
} else {
//....
}
*outStride = tmpStride;
});
//...
return (ret.isOk()) ? error : static_cast<status_t>(kTransactionError);
}
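What does mMapper.importBuffer do with the received handle? Vendor mappers differ (and some defer the actual mmap to lock()), but conceptually it clones the handle and maps the buffer into the caller's address space. A simplified sketch, not any real vendor implementation, with the buffer size assumed known:

#include <cutils/native_handle.h>
#include <sys/mman.h>

// Conceptual sketch of an IMapper::importBuffer implementation.
buffer_handle_t import_buffer(const native_handle_t* raw, size_t size) {
    // 1. Clone the handle: dup()s every fd in data[0..numFds) so this
    //    process owns its own references to the buffer.
    native_handle_t* imported = native_handle_clone(raw);
    if (imported == nullptr) return nullptr;
    // 2. Map the buffer memory into this process through the (first) fd.
    void* vaddr = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, imported->data[0], 0);
    if (vaddr == MAP_FAILED) {
        native_handle_close(imported);
        native_handle_delete(imported);
        return nullptr;
    }
    // Real mappers stash 'vaddr' in per-buffer metadata; omitted here.
    return imported;
}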
The generated HIDL stub on the allocator service side
We skip the actual allocate implementation and focus on how the data is transported.
//out/soong/.intermediates/hardware/interfaces/graphics/allocator/4.0/android.hardware.graphics.allocator@4.0_genc++/gen/android/hardware/graphics/allocator/4.0/AllocatorAll.cpp
// This code is auto-generated into the out directory when the HIDL interface is compiled; it does not exist in the source tree
// Methods from ::android::hardware::graphics::allocator::V4_0::IAllocator follow.
::android::status_t BnHwAllocator::_hidl_allocate(
::android::hidl::base::V1_0::BnHwBase* _hidl_this,
const ::android::hardware::Parcel &_hidl_data,
::android::hardware::Parcel *_hidl_reply,
TransactCallback _hidl_cb) {
//...
//======================== call the real server-side implementation =====================
::android::hardware::Return<void> _hidl_ret = static_cast<IAllocator*>(_hidl_this->getImpl().get())->allocate(*descriptor, count, [&](const auto &_hidl_out_error, const auto &_hidl_out_stride, const auto &_hidl_out_buffers) {
if (_hidl_callbackCalled) {
LOG_ALWAYS_FATAL("allocate: _hidl_cb called a second time, but must be called once.");
}
_hidl_callbackCalled = true;
//=============== the call has completed; now write the reply data ========================
::android::hardware::writeToParcel(::android::hardware::Status::ok(), _hidl_reply);
_hidl_err = _hidl_reply->writeInt32((int32_t)_hidl_out_error);
if (_hidl_err != ::android::OK) { goto _hidl_error; }
// write back the value of tmpStride
_hidl_err = _hidl_reply->writeUint32(_hidl_out_stride);
if (_hidl_err != ::android::OK) { goto _hidl_error; }
size_t _hidl__hidl_out_buffers_parent;
_hidl_err = _hidl_reply->writeBuffer(&_hidl_out_buffers, sizeof(_hidl_out_buffers), &_hidl__hidl_out_buffers_parent);
if (_hidl_err != ::android::OK) { goto _hidl_error; }
size_t _hidl__hidl_out_buffers_child;
_hidl_err = ::android::hardware::writeEmbeddedToParcel(
_hidl_out_buffers,
_hidl_reply,
_hidl__hidl_out_buffers_parent,
0 , &_hidl__hidl_out_buffers_child);
if (_hidl_err != ::android::OK) { goto _hidl_error; }
// key code ==== transmit each element of the callback parameter tmpBuffers; each element is of type hidl_handle
for (size_t _hidl_index_0 = 0; _hidl_index_0 < _hidl_out_buffers.size(); ++_hidl_index_0) {
// key function: android::hardware::writeEmbeddedToParcel
_hidl_err = ::android::hardware::writeEmbeddedToParcel(
_hidl_out_buffers[_hidl_index_0],
_hidl_reply,
_hidl__hidl_out_buffers_child,
_hidl_index_0 * sizeof(::android::hardware::hidl_handle));
if (_hidl_err != ::android::OK) { goto _hidl_error; }
}
//...
if (_hidl_err != ::android::OK) { return; }
_hidl_cb(*_hidl_reply);
});
_hidl_ret.assertOk();
if (!_hidl_callbackCalled) {
LOG_ALWAYS_FATAL("allocate: _hidl_cb not called, but must be called once.");
}
return _hidl_err;
}
android::hardware::writeEmbeddedToParcel
// system/libhidl/transport/HidlBinderSupport.cpp
status_t writeEmbeddedToParcel(const hidl_handle &handle,
Parcel *parcel, size_t parentHandle, size_t parentOffset) {
// this calls the writeEmbeddedNativeHandle function of hwbinder's Parcel.cpp
status_t _hidl_err = parcel->writeEmbeddedNativeHandle(
handle.getNativeHandle(),
parentHandle,
parentOffset + hidl_handle::kOffsetOfNativeHandle);
return _hidl_err;
}
// system/libhwbinder/Parcel.cpp
status_t Parcel::writeEmbeddedNativeHandle(const native_handle_t *handle,
size_t parent_buffer_handle,
size_t parent_offset)
{
return writeNativeHandleNoDup(handle, true , parent_buffer_handle, parent_offset);
}
status_t Parcel::writeNativeHandleNoDup(const native_handle_t *handle,
bool embedded,
size_t parent_buffer_handle,
size_t parent_offset)
{
//...
struct binder_fd_array_object fd_array {
.hdr = { .type = BINDER_TYPE_FDA }, // key: the binder kernel driver has dedicated handling for the BINDER_TYPE_FDA type
.num_fds = static_cast<binder_size_t>(handle->numFds),
.parent = buffer_handle,
.parent_offset = offsetof(native_handle_t, data),
};
return writeObject(fd_array);
}
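Note what actually went onto the wire here. The native_handle_t struct itself was already written as an embedded buffer object (writeBuffer / writeEmbeddedToParcel above); writeNativeHandleNoDup does not serialize the fds again. It emits a binder_fd_array_object that merely tells the kernel where the fds already sit: parent identifies the buffer object holding the handle, and parent_offset is offsetof(native_handle_t, data), i.e. the start of the fd section. For reference, the UAPI struct (abridged; field layout as in recent kernels, to my knowledge):

// From the binder UAPI (include/uapi/linux/android/binder.h), abridged.
struct binder_fd_array_object {
    struct binder_object_header hdr;  /* hdr.type = BINDER_TYPE_FDA */
    __u32         pad;
    binder_size_t num_fds;        /* how many fds to translate */
    binder_size_t parent;         /* the buffer object holding the native_handle_t */
    binder_size_t parent_offset;  /* byte offset of data[0] within that buffer */
};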
For what happens next, see section 【3】: how the binder driver handles BINDER_TYPE_FDA and BINDER_TYPE_FD.
【2】How the App process and the SurfaceFlinger process pass the GraphicBuffer object
Server-side requestBuffer flow
// frameworks/native/libs/gui/IGraphicBufferProducer.cpp
status_t BnGraphicBufferProducer::onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
switch(code) {
case REQUEST_BUFFER: {
CHECK_INTERFACE(IGraphicBufferProducer, data, reply);
int bufferIdx = data.readInt32();
sp<GraphicBuffer> buffer;
int result = requestBuffer(bufferIdx, &buffer);
reply->writeInt32(buffer != nullptr);
if (buffer != nullptr) {
reply->write(*buffer); // ==== key: the GraphicBuffer object is written into the reply ====
}
reply->writeInt32(result);
return NO_ERROR;
}
//...
}
//...
}
The Parcel::write object-writing flow
Parcel::write in Parcel.h
// frameworks/native/libs/binder/include/binder/Parcel.h
template<typename T>
status_t Parcel::write(const Flattenable<T>& val) { // the object must derive from Flattenable
const FlattenableHelper<T> helper(val);
return write(helper);
}
Parcel::write in Parcel.cpp
// frameworks/native/libs/binder/Parcel.cpp
status_t Parcel::write(const FlattenableHelperInterface& val)
{
status_t err;
// size if needed
const size_t len = val.getFlattenedSize();
// val.getFdCount() returns GraphicBuffer::mTransportNumFds, which comes from
// GrallocMapper::getTransportSize(buffer_handle_t bufferHandle, uint32_t* outNumFds, uint32_t* outNumInts)
const size_t fd_count = val.getFdCount();
//...........
// call the object's flatten to write it into the buffer
err = val.flatten(buf, len, fds, fd_count);
// when fd_count is non-zero, the fds must be written as well
for (size_t i=0 ; i<fd_count && err==NO_ERROR ; i++) {
err = this->writeDupFileDescriptor( fds[i] );
}
if (fd_count) {
delete [] fds;
}
return err;
}
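For completeness, this is roughly what GraphicBuffer::flatten does with the two destinations it is handed (an abridged sketch based on frameworks/native/libs/ui/GraphicBuffer.cpp, not the verbatim source): metadata and the handle's ints go into the byte buffer, while the fds go into the separate fds array that Parcel::write then feeds to writeDupFileDescriptor one by one.

// Abridged sketch of GraphicBuffer::flatten.
status_t GraphicBuffer::flatten(void*& buffer, size_t& size,
                                int*& fds, size_t& count) const {
    int32_t* buf = static_cast<int32_t*>(buffer);
    buf[0] = 'GB01';              // magic
    buf[1] = width;
    buf[2] = height;
    buf[3] = stride;
    buf[4] = format;
    // ... layerCount, usage, buffer id, generation number ...
    if (handle) {
        buf[10] = int32_t(mTransportNumFds);
        buf[11] = int32_t(mTransportNumInts);
        // The fds never enter the byte buffer: they leave through the
        // side channel so the Parcel can wrap each one in BINDER_TYPE_FD.
        memcpy(fds, handle->data,
               size_t(mTransportNumFds) * sizeof(int));
        memcpy(buf + 12, handle->data + handle->numFds,
               size_t(mTransportNumInts) * sizeof(int));
    }
    // ... advance the buffer/size/fds/count cursors ...
    return NO_ERROR;
}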
The Parcel::writeDupFileDescriptor fd-writing flow
// frameworks/native/libs/binder/Parcel.cpp
status_t Parcel::writeDupFileDescriptor(int fd)
{
int dupFd;
if (status_t err = dupFileDescriptor(fd, &dupFd); err != OK) {
return err;
}
//============= key call ===========
status_t err = writeFileDescriptor(dupFd, true );
if (err != OK) {
close(dupFd);
}
return err;
}
status_t Parcel::writeFileDescriptor(int fd, bool takeOwnership) {
//........
#ifdef BINDER_WITH_KERNEL_IPC // this macro is defined in frameworks/native/libs/binder/Android.bp: "-DBINDER_WITH_KERNEL_IPC"
flat_binder_object obj;
obj.hdr.type = BINDER_TYPE_FD; // fd type: the kernel will create a matching fd in the target process
obj.flags = 0;
obj.binder = 0;
obj.handle = fd;
obj.cookie = takeOwnership ? 1 : 0;
return writeObject(obj, true);
#else // BINDER_WITH_KERNEL_IPC
LOG_ALWAYS_FATAL("Binder kernel driver disabled at build time");
(void)fd;
(void)takeOwnership;
return INVALID_OPERATION;
#endif // BINDER_WITH_KERNEL_IPC
}
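On the receiving side there is nothing special left to do. By the time the reply Parcel reaches the App, the kernel has already rewritten each flat_binder_object's handle field to an fd that is valid in the receiving process, so Parcel::read simply dup()s what readFileDescriptor returns and hands the fds to GraphicBuffer::unflatten. Abridged:

// Abridged from frameworks/native/libs/binder/Parcel.cpp.
int Parcel::readFileDescriptor() const {
    const flat_binder_object* flat = readObject(true);
    if (flat && flat->hdr.type == BINDER_TYPE_FD) {
        return flat->handle;  // already a local fd, rewritten by the kernel
    }
    return BAD_TYPE;
}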
For what happens next, see section 【3】: how the binder driver handles BINDER_TYPE_FDA and BINDER_TYPE_FD.
【3】Linux kernel: how the binder driver handles the BINDER_TYPE_FDA and BINDER_TYPE_FD types
binder_transaction
// This walks the 4.14 kernel (android-msm-coral-4.14-android10, see the URL below), where the logic is relatively clear
// newer kernels moved this into the binder_apply_fd_fixups path
// https://android.googlesource.com/kernel/msm/+/refs/heads/android-msm-coral-4.14-android10/drivers/android/binder.c
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply,
binder_size_t extra_buffers_size)
{
//...
case BINDER_TYPE_FD: {
struct binder_fd_object *fp = to_binder_fd_object(hdr);
// for BINDER_TYPE_FD objects, binder_translate_fd is called
int target_fd = binder_translate_fd(fp->fd, t, thread, in_reply_to);
if (target_fd < 0) {
return_error = BR_FAILED_REPLY;
return_error_param = target_fd;
return_error_line = __LINE__;
goto err_translate_failed;
}
fp->pad_binder = 0;
fp->fd = target_fd;
binder_alloc_copy_to_buffer(&target_proc->alloc,
t->buffer, object_offset,
fp, sizeof(*fp));
} break;
case BINDER_TYPE_FDA: {
struct binder_object ptr_object;
binder_size_t parent_offset;
struct binder_fd_array_object *fda =
to_binder_fd_array_object(hdr);
size_t num_valid = (buffer_offset - off_start_offset) /
sizeof(binder_size_t);
struct binder_buffer_object *parent =
binder_validate_ptr(target_proc, t->buffer,
&ptr_object, fda->parent,
off_start_offset,
&parent_offset,
num_valid);
if (!parent) {
binder_user_error("%d:%d got transaction with invalid parent offset or type\n",
proc->pid, thread->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -EINVAL;
return_error_line = __LINE__;
goto err_bad_parent;
}
if (!binder_validate_fixup(target_proc, t->buffer,
off_start_offset,
parent_offset,
fda->parent_offset,
last_fixup_obj_off,
last_fixup_min_off)) {
binder_user_error("%d:%d got transaction with out-of-order buffer fixup\n",
proc->pid, thread->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -EINVAL;
return_error_line = __LINE__;
goto err_bad_parent;
}
// for BINDER_TYPE_FDA objects, binder_translate_fd_array is called
ret = binder_translate_fd_array(fda, parent, t, thread, in_reply_to);
if (ret < 0) {
return_error = BR_FAILED_REPLY;
return_error_param = ret;
return_error_line = __LINE__;
goto err_translate_failed;
}
last_fixup_obj_off = parent_offset;
last_fixup_min_off =
fda->parent_offset + sizeof(u32) * fda->num_fds;
} break;
//...
}
binder_translate_fd_array calls binder_translate_fd for every fd in the array, rewriting each one in place inside the target buffer (abridged below from the same binder.c):
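// Abridged sketch: bounds checks and error paths elided.
static int binder_translate_fd_array(struct binder_fd_array_object *fda,
                                     struct binder_buffer_object *parent,
                                     struct binder_transaction *t,
                                     struct binder_thread *thread,
                                     struct binder_transaction *in_reply_to)
{
    struct binder_proc *target_proc = t->to_proc;
    binder_size_t fdi;
    /* offset of data[0] of the native_handle_t inside this transaction */
    binder_size_t fda_offset =
        (parent->buffer - (uintptr_t)t->buffer->user_data) +
        fda->parent_offset;
    for (fdi = 0; fdi < fda->num_fds; fdi++) {
        u32 fd;
        int target_fd;
        binder_size_t offset = fda_offset + fdi * sizeof(fd);
        /* read the sender's fd out of the buffer ... */
        binder_alloc_copy_from_buffer(&target_proc->alloc, &fd,
                                      t->buffer, offset, sizeof(fd));
        target_fd = binder_translate_fd(fd, t, thread, in_reply_to);
        if (target_fd < 0)
            return target_fd;
        /* ... and overwrite it, in place, with the receiver's fd */
        binder_alloc_copy_to_buffer(&target_proc->alloc, t->buffer,
                                    offset, &target_fd, sizeof(fd));
    }
    return 0;
}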
binder_translate_fd
static int binder_translate_fd(int fd,
struct binder_transaction *t,
struct binder_thread *thread,
struct binder_transaction *in_reply_to)
{
struct binder_proc *proc = thread->proc;
struct binder_proc *target_proc = t->to_proc;
int target_fd;
struct file *file;
int ret;
bool target_allows_fd;
if (in_reply_to)
target_allows_fd = !!(in_reply_to->flags & TF_ACCEPT_FDS);
else
target_allows_fd = t->buffer->target_node->accept_fds;
if (!target_allows_fd) {
binder_user_error("%d:%d got %s with fd, %d, but target does not allow fds\n",
proc->pid, thread->pid,
in_reply_to ? "reply" : "transaction",
fd);
ret = -EPERM;
goto err_fd_not_accepted;
}
file = fget(fd);// get the struct file from the sender's fd
if (!file) {
binder_user_error("%d:%d got transaction with invalid fd, %d\n",
proc->pid, thread->pid, fd);
ret = -EBADF;
goto err_fget;
}
// SELinux permission check
ret = security_binder_transfer_file(proc->tsk, target_proc->tsk, file);
if (ret < 0) {
ret = -EPERM;
goto err_security;
}
// find an unused fd number in the target process
target_fd = task_get_unused_fd_flags(target_proc, O_CLOEXEC);
if (target_fd < 0) {
ret = -ENOMEM;
goto err_get_unused_fd;
}
// task_fd_install associates the file object with that fd in the target process
task_fd_install(target_proc, target_fd, file);
trace_binder_transaction_fd(t, fd, target_fd);
binder_debug(BINDER_DEBUG_TRANSACTION, " fd %d -> %d\n",
fd, target_fd);
return target_fd;
err_get_unused_fd:
err_security:
fput(file);
err_fget:
err_fd_not_accepted:
return ret;
}
Appendix: the key data types behind graphic buffers
1. GraphicBuffer: the object the App-side Surface and SurfaceFlinger use to pass the shared-memory fds
GraphicBuffer
// frameworks/native/libs/ui/include/ui/GraphicBuffer.h
class GraphicBuffer
: public ANativeObjectBase<ANativeWindowBuffer, GraphicBuffer, RefBase>,
public Flattenable<GraphicBuffer>
{ //...
status_t flatten(void*& buffer, size_t& size, int*& fds, size_t& count) const;
status_t unflatten(void const*& buffer, size_t& size, int const*& fds, size_t& count);
//...
}
// frameworks/native/libs/ui/include/ui/ANativeObjectBase.h
template <typename NATIVE_TYPE, typename TYPE, typename REF,
typename NATIVE_BASE = android_native_base_t>
class ANativeObjectBase : public NATIVE_TYPE, public REF
{
//...
}
// after substituting the template parameters:
class ANativeObjectBase : public ANativeWindowBuffer, public RefBase
{
//...
}
- GraphicBuffer inherits from ANativeWindowBuffer and Flattenable
- Flattenable's two key functions, flatten and unflatten, are used during binder serialization; its contract is sketched below.
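The Flattenable contract that Parcel::write/read rely on (abridged from utils/Flattenable.h in libutils; it is a CRTP template that forwards to the derived class):

// Abridged from utils/Flattenable.h.
template <typename T>
class Flattenable {
public:
    // size in bytes of the flattened object (the byte-buffer part only)
    inline size_t getFlattenedSize() const;
    // number of file descriptors to flatten (kept out of the byte buffer)
    inline size_t getFdCount() const;
    // flatten the object into buffer + fds
    inline status_t flatten(void*& buffer, size_t& size,
                            int*& fds, size_t& count) const;
    // rebuild the object from buffer + fds received from the remote side
    inline status_t unflatten(void const*& buffer, size_t& size,
                              int const*& fds, size_t& count);
};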
ANativeWindowBuffer
// frameworks/native/libs/nativebase/include/nativebase/nativebase.h
// buffer size = stride * height * bytes-per-pixel
typedef struct ANativeWindowBuffer
{
...
int width; // buffer width
int height; // buffer height
int stride; // buffer stride; may differ from width because of alignment
int format; // pixel format
const native_handle_t* handle; // points to the actual graphic buffer (fds + metadata)
uint64_t usage; // usage flags (gralloc allocates buffers with different properties accordingly)
...
} ANativeWindowBuffer_t;
native_handle_t
// system/core/libcutils/include/cutils/native_handle.h
typedef struct native_handle
{
int version;
// number of file descriptors, stored starting at data[0]
int numFds;
// number of ints, stored starting at &data[numFds]
int numInts;
int data[0];
} native_handle_t;
buffer_handle_t is just an alias for const native_handle_t*
// system/core/libcutils/include/cutils/native_handle.h
typedef const native_handle_t* buffer_handle_t;
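For intuition, this is how one would build such a handle around a freshly allocated buffer fd. A hypothetical example only: real gralloc handles wrap native_handle_t and carry many more ints, and the two metadata ints here are illustrative.

#include <cutils/native_handle.h>

// Hypothetical: wrap one buffer fd plus two metadata ints in a handle.
native_handle_t* make_buffer_handle(int buffer_fd, int width, int height) {
    native_handle_t* h = native_handle_create(/*numFds=*/1, /*numInts=*/2);
    if (h == nullptr) return nullptr;
    h->data[0] = buffer_fd;  // fds come first: data[0 .. numFds)
    h->data[1] = width;      // then the ints: data[numFds .. numFds+numInts)
    h->data[2] = height;
    return h;
}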
2. The data type HIDL interfaces use to pass fds between processes (how hwbinder carries fds)
- hidl_handle is used to pass the shared-memory fds between SurfaceFlinger and the IAllocator HIDL service
- on Qualcomm platforms, the corresponding HIDL server process is vendor.qti.hardware.display.allocator-service
hidl_handle
struct hidl_handle {
hidl_handle();
~hidl_handle();
hidl_handle(const native_handle_t *handle);
// copy constructor.
hidl_handle(const hidl_handle &other);
// move constructor.
hidl_handle(hidl_handle &&other) noexcept;
// assignment operators
hidl_handle &operator=(const hidl_handle &other);
hidl_handle &operator=(const native_handle_t *native_handle);
hidl_handle &operator=(hidl_handle &&other) noexcept;
void setTo(native_handle_t* handle, bool shouldOwn = false);
const native_handle_t* operator->() const;
// implicit conversion to const native_handle_t*
operator const native_handle_t *() const;
// explicit conversion
const native_handle_t *getNativeHandle() const;
// offsetof(hidl_handle, mHandle) exposed since mHandle is private.
static const size_t kOffsetOfNativeHandle;
private:
void freeHandle();
// the core payload: a pointer to the native_handle_t
details::hidl_pointer<const native_handle_t> mHandle;
bool mOwnsHandle;
uint8_t mPad[7];
};
Summary:
- The SurfaceFlinger process and the IAllocator service process pass the graphic buffer's shared-memory fds through hidl_handle objects
  - during transport, the hidl_handle data is special-cased and written as a binder object of type BINDER_TYPE_FDA
  - the binder kernel driver special-cases BINDER_TYPE_FDA objects, translating the embedded fds for the target process
  - in the callback of IAllocator::allocate, IMapper::importBuffer is called to map the memory into the current process
- The App process and the SurfaceFlinger process pass the graphic buffer's shared-memory fds through the GraphicBuffer object
  - during transport, the native_handle_t inside the GraphicBuffer is special-cased and its fds are written as binder objects of type BINDER_TYPE_FD
  - the binder kernel driver special-cases BINDER_TYPE_FD objects, creating matching fds in the target process
  - when the GraphicBuffer object is rebuilt from the binder reply, GraphicBuffer::unflatten runs and internally calls IMapper::importBuffer to map the memory into the current process
Postscript
Android 12 and later use BLASTBufferQueue. Some details change, but once you understand how GraphicBuffer and hidl_handle carry fds across processes, the rest follows easily.
Android's AIDL layer, HIDL layer, and binder Bp interface layer all hide a lot of key code, which is why reading the source can feel like wandering in fog.
- HIDL interfaces in particular generate large amounts of code under out/soong; you will never find it by searching the source tree alone.