Added GENLOCK.

tytung 2012-05-01 14:50:48 +08:00
parent c6de4393cf
commit 57f5775c0b
7 changed files with 1178 additions and 0 deletions

Documentation/genlock.txt Normal file

@@ -0,0 +1,161 @@
Introduction
'genlock' is an in-kernel API and optional userspace interface for a generic
cross-process locking mechanism. The API is designed for situations where
multiple user space processes and/or kernel drivers need to coordinate access
to a shared resource, such as a graphics buffer. The API was designed with
graphics buffers in mind, but is sufficiently generic to allow it to be
independently used with different types of resources. The chief advantage
of genlock over other cross-process locking mechanisms is that resources
can be accessed by both user space and kernel drivers, which allows a resource
to be locked or unlocked by an asynchronous event in the kernel without the
intervention of user space.
As an example, consider a graphics buffer that is shared between a rendering
application and a compositing window manager. The application renders into a
buffer. That buffer is reused by the compositing window manager as a texture.
To avoid corruption, access to the buffer needs to be restricted so that one
is not drawing on the surface while the other is reading. Locks can be
explicitly added between the rendering stages in the processes, but explicit
locks require that the application wait for rendering and purposely release the
lock. An implicit release triggered by an asynchronous event from the GPU
kernel driver, however, will let execution continue without requiring the
intercession of user space.
SW Goals
The genlock API implements exclusive write locks and shared read locks, meaning
that there can only be one writer at a time, but multiple readers. Processes
that are unable to acquire a lock can optionally block until the resource
becomes available.
Locks are shared between processes. Each process will have its own private
instance of a lock, known as a handle. Handles can be shared between user
space and kernel space to allow a kernel driver to unlock or lock a buffer
on behalf of a user process.
Kernel API
Access to genlock is either via the in-kernel API or via an optional
character device (/dev/genlock). The character device is intended primarily
for legacy resource sharing APIs that cannot be easily changed. New resource
sharing APIs from this point forward should implement a scheme-specific
wrapper for locking.
To create or attach to an existing lock, a process or kernel driver must first
create a handle. Each handle is linked to a single lock at any time. An entity
may have multiple handles, each associated with a different lock. Once a handle
has been created, the owner may create a new lock or attach an existing lock
that has been exported from a different handle.
Once the handle has a lock attached, the owning process may attempt to lock the
buffer for read or write. Write locks are exclusive, meaning that only one
process may hold the lock at any given time. Read locks are shared, meaning that
multiple readers can hold the lock at the same time. Attempts to acquire a read
lock with a writer active or a write lock with one or more readers or writers
active will typically cause the process to block until the lock is acquired.
When the lock is released, all waiting processes will be woken up. Ownership
of the lock is reference counted, meaning that any one owner can "lock"
multiple times. The lock is only released by an owner when all of its
references have been released via unlock.
The owner of a write lock may atomically convert the lock into a read lock
(which will wake up other processes waiting for a read lock) without first
releasing the lock. The owner would simply issue a new request for a read lock.
However, the owner of a read lock cannot convert it into a write lock in the
same manner. To switch from a read lock to a write lock, the owner must
release the lock and then try to reacquire it.
These are the in-kernel API calls that drivers can use to create and
manipulate handles and locks. Handles can either be created and managed
completely inside of kernel space, or shared from user space via a file
descriptor; a short usage sketch follows the list.
* struct genlock_handle *genlock_get_handle(void)
Create a new handle.
* struct genlock_handle *genlock_get_handle_fd(int fd)
Given a valid file descriptor, return the handle associated with that
descriptor.
* void genlock_put_handle(struct genlock_handle *)
Release a handle.
* struct genlock *genlock_create_lock(struct genlock_handle *)
Create a new lock and attach it to the handle.
* struct genlock *genlock_attach_lock(struct genlock_handle *handle, int fd)
Given a valid file descriptor, get the lock associated with it and attach it to
the handle.
* void genlock_release_lock(struct genlock_handle *)
Release a lock attached to a handle.
* int genlock_lock(struct genlock_handle *, int op, int flags, u32 timeout)
Lock or unlock the lock attached to the handle. A zero timeout value is
treated as if the GENLOCK_NOBLOCK flag were passed; if the lock can be
acquired without blocking then do so, otherwise return -EAGAIN.
Function returns -ETIMEDOUT if the timeout expired or 0 if the lock was
acquired.
* int genlock_wait(struct genlock_handle *, u32 timeout)
Wait for a lock held by the handle to go to the unlocked state. A non-zero
timeout value must be passed. Returns -ETIMEDOUT if the timeout expired or
0 if the lock is in an unlocked state.
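As an illustration, here is a minimal sketch of how a kernel driver might use
these calls; the function name is hypothetical and error handling is
abbreviated:

#include <linux/err.h>
#include <linux/genlock.h>

/* Hypothetical driver function: write a shared buffer, then downgrade
   to a read lock so peers can consume it */
static int example_render_pass(void)
{
    struct genlock_handle *handle;
    struct genlock *lock;
    int ret;

    handle = genlock_get_handle();
    if (IS_ERR(handle))
        return PTR_ERR(handle);

    lock = genlock_create_lock(handle);
    if (IS_ERR(lock)) {
        genlock_put_handle(handle);
        return PTR_ERR(lock);
    }

    /* Take exclusive access, waiting up to 100 ms */
    ret = genlock_lock(handle, GENLOCK_WRLOCK, 0, 100);
    if (ret == 0) {
        /* ... write to the shared buffer ... */

        /* Atomically downgrade to a shared read lock, waking
           any waiting readers */
        genlock_lock(handle, GENLOCK_RDLOCK, 0, 100);

        /* ... */

        genlock_lock(handle, GENLOCK_UNLOCK, 0, 0);
    }

    genlock_put_handle(handle);
    return ret;
}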
Character Device
Opening the /dev/genlock character device automatically creates a new handle.
All ioctl functions, with the exception of NEW and RELEASE, use the following
parameter structure (a user space usage sketch follows the ioctl descriptions):
struct genlock_lock {
int fd; /* Returned by EXPORT, used by ATTACH */
int op; /* Used by LOCK */
int flags; /* used by LOCK */
u32 timeout; /* Used by LOCK and WAIT */
};
* GENLOCK_IOC_NEW
Creates a new lock and attaches it to the handle. Returns -EINVAL if the handle
already has a lock attached (use GENLOCK_IOC_RELEASE to remove it). Returns
-ENOMEM if the memory for the lock can not be allocated. No data is passed
from the user for this ioctl.
* GENLOCK_IOC_EXPORT
Export the currently attached lock to a file descriptor. The file descriptor
is returned in genlock_lock.fd.
* GENLOCK_IOC_ATTACH
Attach an exported lock file descriptor to the current handle. Returns -EINVAL
if the handle already has a lock attached (use GENLOCK_IOC_RELEASE to remove
it). Pass the file descriptor in genlock_lock.fd.
* GENLOCK_IOC_LOCK
Lock or unlock the attached lock. Pass the desired operation in
genlock_lock.op:
* GENLOCK_WRLOCK - write lock
* GENLOCK_RDLOCK - read lock
* GENLOCK_UNLOCK - unlock an existing lock
Pass flags in genlock_lock.flags:
* GENLOCK_NOBLOCK - Do not block if the lock is already taken
Pass a timeout value in milliseconds in genlock_lock.timeout.
genlock_lock.flags and genlock_lock.timeout are not used for UNLOCK.
Returns -EINVAL if no lock is attached, -EAGAIN if the lock is taken and
NOBLOCK is specified or if the timeout value is zero, -ETIMEDOUT if the timeout
expires, or 0 if the lock was acquired.
* GENLOCK_IOC_WAIT
Wait for the lock attached to the handle to be released (i.e. to go to the
unlocked state).
This is mainly used for a thread that needs to wait for a peer to release a
lock on the same shared handle. A non-zero timeout value in milliseconds is
passed in genlock_lock.timeout. Returns 0 when the lock has been released,
-EINVAL if a zero timeout is passed, or -ETIMEDOUT if the timeout expires.
* GENLOCK_IOC_RELEASE
Use this to release an existing lock. This is useful if you wish to attach a
different lock to the same handle. You do not need to call this under normal
circumstances; when the handle is closed the reference to the lock is released.
No data is passed from the user for this ioctl.
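For illustration, here is a minimal user space sketch of the export and lock
flow, assuming the definitions from <linux/genlock.h> are available to user
space; error handling is abbreviated and the hand-off of the exported fd to a
peer process is elided:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/genlock.h>

int main(void)
{
    struct genlock_lock param = { 0 };
    int dev = open("/dev/genlock", O_RDWR);

    if (dev < 0)
        return 1;

    /* Create a lock on this handle and export it for sharing */
    if (ioctl(dev, GENLOCK_IOC_NEW) ||
        ioctl(dev, GENLOCK_IOC_EXPORT, &param))
        return 1;

    /* param.fd would now be sent to a peer (e.g. over a unix socket),
       which would pass it to GENLOCK_IOC_ATTACH on its own handle */

    /* Take a write lock, waiting up to 100 ms */
    param.op = GENLOCK_WRLOCK;
    param.flags = 0;
    param.timeout = 100;
    if (ioctl(dev, GENLOCK_IOC_LOCK, &param))
        return 1;

    /* ... render into the shared buffer ... */

    param.op = GENLOCK_UNLOCK;
    ioctl(dev, GENLOCK_IOC_LOCK, &param);

    close(dev);
    return 0;
}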

drivers/base/Kconfig

@@ -151,4 +151,18 @@ config SYS_HYPERVISOR
bool
default n
config GENLOCK
bool "Enable a generic cross-process locking mechanism"
depends on ANON_INODES
help
Enable a generic cross-process locking API to provide protection
for shared memory objects such as graphics buffers.
config GENLOCK_MISCDEVICE
bool "Enable a misc-device for userspace to access the genlock engine"
depends on GENLOCK
help
Create a miscdevice for the purposes of allowing userspace to create
and interact with locks created using genlock.
endmenu
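For reference, a board defconfig that wants the userspace interface would
carry a fragment along these lines (ANON_INODES must already be selected
elsewhere in the configuration):

CONFIG_GENLOCK=y
CONFIG_GENLOCK_MISCDEVICE=y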

drivers/base/Makefile

@@ -8,6 +8,7 @@ obj-$(CONFIG_DEVTMPFS) += devtmpfs.o
obj-y += power/
obj-$(CONFIG_HAS_DMA) += dma-mapping.o
obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
obj-$(CONFIG_GENLOCK) += genlock.o
obj-$(CONFIG_ISA) += isa.o
obj-$(CONFIG_FW_LOADER) += firmware_class.o
obj-$(CONFIG_NUMA) += node.o

drivers/base/genlock.c Normal file

@@ -0,0 +1,640 @@
/* Copyright (c) 2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/fb.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/list.h>
#include <linux/file.h>
#include <linux/sched.h>
#include <linux/fs.h>
#include <linux/wait.h>
#include <linux/uaccess.h>
#include <linux/anon_inodes.h>
#include <linux/miscdevice.h>
#include <linux/genlock.h>
#include <linux/interrupt.h> /* for in_interrupt() */
/* Lock states - can either be unlocked, held as an exclusive write lock or a
* shared read lock
*/
#define _UNLOCKED 0
#define _RDLOCK GENLOCK_RDLOCK
#define _WRLOCK GENLOCK_WRLOCK
struct genlock {
struct list_head active; /* List of handles holding lock */
spinlock_t lock; /* Spinlock to protect the lock internals */
wait_queue_head_t queue; /* Holding pen for processes pending lock */
struct file *file; /* File structure for exported lock */
int state; /* Current state of the lock */
struct kref refcount;
};
struct genlock_handle {
struct genlock *lock; /* Lock currently attached to the handle */
struct list_head entry; /* List node for attaching to a lock */
struct file *file; /* File structure associated with handle */
int active; /* Number of times the active lock has been
taken */
};
static void genlock_destroy(struct kref *kref)
{
struct genlock *lock = container_of(kref, struct genlock,
refcount);
kfree(lock);
}
/*
* Release the genlock object. Called when all the references to
* the genlock file descriptor are released
*/
static int genlock_release(struct inode *inodep, struct file *file)
{
return 0;
}
static const struct file_operations genlock_fops = {
.release = genlock_release,
};
/**
* genlock_create_lock - Create a new lock
* @handle - genlock handle to attach the lock to
*
* Returns: a pointer to the genlock
*/
struct genlock *genlock_create_lock(struct genlock_handle *handle)
{
struct genlock *lock;
if (handle->lock != NULL)
return ERR_PTR(-EINVAL);
lock = kzalloc(sizeof(*lock), GFP_KERNEL);
if (lock == NULL)
return ERR_PTR(-ENOMEM);
INIT_LIST_HEAD(&lock->active);
init_waitqueue_head(&lock->queue);
spin_lock_init(&lock->lock);
lock->state = _UNLOCKED;
/*
* Create an anonymous inode for the object that can be exported to
* other processes
*/
lock->file = anon_inode_getfile("genlock", &genlock_fops,
lock, O_RDWR);
if (IS_ERR(lock->file)) {
kfree(lock);
return ERR_PTR(-ENOMEM);
}
/* Attach the new lock to the handle */
handle->lock = lock;
kref_init(&lock->refcount);
return lock;
}
EXPORT_SYMBOL(genlock_create_lock);
/*
* Get a file descriptor reference to a lock suitable for sharing with
* other processes
*/
static int genlock_get_fd(struct genlock *lock)
{
int ret;
if (!lock->file)
return -EINVAL;
ret = get_unused_fd_flags(0);
if (ret < 0)
return ret;
fd_install(ret, lock->file);
return ret;
}
/**
* genlock_attach_lock - Attach an existing lock to a handle
* @handle - Pointer to a genlock handle to attach the lock to
* @fd - file descriptor for the exported lock
*
* Returns: A pointer to the attached lock structure
*/
struct genlock *genlock_attach_lock(struct genlock_handle *handle, int fd)
{
struct file *file;
struct genlock *lock;
if (handle->lock != NULL)
return ERR_PTR(-EINVAL);
file = fget(fd);
if (file == NULL)
return ERR_PTR(-EBADF);
/* Make sure the fd actually refers to an exported genlock before
trusting its private data */
if (file->f_op != &genlock_fops) {
fput(file);
return ERR_PTR(-EINVAL);
}
lock = file->private_data;
fput(file);
if (lock == NULL)
return ERR_PTR(-EINVAL);
handle->lock = lock;
kref_get(&lock->refcount);
return lock;
}
EXPORT_SYMBOL(genlock_attach_lock);
/* Helper function that returns 1 if the specified handle holds the lock */
static int handle_has_lock(struct genlock *lock, struct genlock_handle *handle)
{
struct genlock_handle *h;
list_for_each_entry(h, &lock->active, entry) {
if (h == handle)
return 1;
}
return 0;
}
/* If the lock just became available, signal the next entity waiting for it */
static void _genlock_signal(struct genlock *lock)
{
if (list_empty(&lock->active)) {
/* If the list is empty, then the lock is free */
lock->state = _UNLOCKED;
/* Wake up the first process sitting in the queue */
wake_up(&lock->queue);
}
}
/* Attempt to release the handle's ownership of the lock */
static int _genlock_unlock(struct genlock *lock, struct genlock_handle *handle)
{
int ret = -EINVAL;
unsigned long irqflags;
spin_lock_irqsave(&lock->lock, irqflags);
if (lock->state == _UNLOCKED)
goto done;
/* Make sure this handle is an owner of the lock */
if (!handle_has_lock(lock, handle))
goto done;
/* If the handle holds no more references to the lock then
drop its ownership and signal any waiters */
if (--handle->active == 0) {
list_del(&handle->entry);
_genlock_signal(lock);
}
ret = 0;
done:
spin_unlock_irqrestore(&lock->lock, irqflags);
return ret;
}
/* Attempt to acquire the lock for the handle */
static int _genlock_lock(struct genlock *lock, struct genlock_handle *handle,
int op, int flags, uint32_t timeout)
{
unsigned long irqflags;
int ret = 0;
unsigned int ticks = msecs_to_jiffies(timeout);
spin_lock_irqsave(&lock->lock, irqflags);
/* Sanity check - blocking locks are not allowed in interrupt context.
* Even if the attempt happened not to block, the mere possibility is
* too dangerous to continue
*/
if (in_interrupt() && !(flags & GENLOCK_NOBLOCK))
BUG();
/* Fast path - the lock is unlocked, so go do the needful */
if (lock->state == _UNLOCKED)
goto dolock;
if (handle_has_lock(lock, handle)) {
/*
* If the handle already holds the lock and the type matches,
* then just increment the active count. This allows the
* handle to take recursive locks
*/
if (lock->state == op) {
handle->active++;
goto done;
}
/*
* If the handle holds a write lock then the owner can switch
* to a read lock if they want. Do the transition atomically
* then wake up any pending waiters in case they want a read
* lock too.
*/
if (op == _RDLOCK && handle->active == 1) {
lock->state = _RDLOCK;
wake_up(&lock->queue);
goto done;
}
/*
* Otherwise the user tried to turn a read into a write, and we
* don't allow that.
*/
ret = -EINVAL;
goto done;
}
/*
* If we request a read and the lock is held by a read, then go
* ahead and share the lock
*/
if (op == GENLOCK_RDLOCK && lock->state == _RDLOCK)
goto dolock;
/* Treat a zero timeout just like the NOBLOCK flag and return if the
lock cannot be acquired without blocking */
if (flags & GENLOCK_NOBLOCK || timeout == 0) {
ret = -EAGAIN;
goto done;
}
/* Wait while the lock remains in an incompatible state */
while (lock->state != _UNLOCKED) {
long elapsed; /* wait_event_interruptible_timeout() returns long */
spin_unlock_irqrestore(&lock->lock, irqflags);
elapsed = wait_event_interruptible_timeout(lock->queue,
lock->state == _UNLOCKED, ticks);
spin_lock_irqsave(&lock->lock, irqflags);
if (elapsed <= 0) {
ret = (elapsed < 0) ? elapsed : -ETIMEDOUT;
goto done;
}
ticks = elapsed;
}
dolock:
/* We can now get the lock, add ourselves to the list of owners */
list_add_tail(&handle->entry, &lock->active);
lock->state = op;
handle->active = 1;
done:
spin_unlock_irqrestore(&lock->lock, irqflags);
return ret;
}
/**
* genlock_lock - Acquire or release a lock
* @handle - pointer to the genlock handle that is requesting the lock
* @op - the operation to perform (RDLOCK, WRLOCK, UNLOCK)
* @flags - flags to control the operation
* @timeout - optional timeout to wait for the lock to come free
*
* Returns: 0 on success or error code on failure
*/
int genlock_lock(struct genlock_handle *handle, int op, int flags,
uint32_t timeout)
{
struct genlock *lock = handle->lock;
int ret = 0;
if (lock == NULL)
return -EINVAL;
switch (op) {
case GENLOCK_UNLOCK:
ret = _genlock_unlock(lock, handle);
break;
case GENLOCK_RDLOCK:
case GENLOCK_WRLOCK:
ret = _genlock_lock(lock, handle, op, flags, timeout);
break;
default:
ret = -EINVAL;
break;
}
return ret;
}
EXPORT_SYMBOL(genlock_lock);
/**
* genlock_wait - Wait for the lock to be released
* @handle - pointer to the genlock handle that is waiting for the lock
* @timeout - optional timeout to wait for the lock to get released
*/
int genlock_wait(struct genlock_handle *handle, uint32_t timeout)
{
struct genlock *lock = handle->lock;
unsigned long irqflags;
int ret = 0;
unsigned int ticks = msecs_to_jiffies(timeout);
if (lock == NULL)
return -EINVAL;
spin_lock_irqsave(&lock->lock, irqflags);
/*
* If the timeout is zero then succeed if the lock is already unlocked,
* otherwise return -EAGAIN
*/
if (timeout == 0) {
ret = (lock->state == _UNLOCKED) ? 0 : -EAGAIN;
goto done;
}
while (lock->state != _UNLOCKED) {
long elapsed; /* wait_event_interruptible_timeout() returns long */
spin_unlock_irqrestore(&lock->lock, irqflags);
elapsed = wait_event_interruptible_timeout(lock->queue,
lock->state == _UNLOCKED, ticks);
spin_lock_irqsave(&lock->lock, irqflags);
if (elapsed <= 0) {
ret = (elapsed < 0) ? elapsed : -ETIMEDOUT;
break;
}
ticks = elapsed;
}
done:
spin_unlock_irqrestore(&lock->lock, irqflags);
return ret;
}
/**
* genlock_release_lock - Release a lock attached to a handle
* @handle - Pointer to the handle holding the lock
*/
void genlock_release_lock(struct genlock_handle *handle)
{
unsigned long flags;
if (handle == NULL || handle->lock == NULL)
return;
spin_lock_irqsave(&handle->lock->lock, flags);
/* If the handle is holding the lock, then force it closed */
if (handle_has_lock(handle->lock, handle)) {
list_del(&handle->entry);
_genlock_signal(handle->lock);
}
spin_unlock_irqrestore(&handle->lock->lock, flags);
kref_put(&handle->lock->refcount, genlock_destroy);
handle->lock = NULL;
handle->active = 0;
}
EXPORT_SYMBOL(genlock_release_lock);
/*
* Release function called when all references to a handle are released
*/
static int genlock_handle_release(struct inode *inodep, struct file *file)
{
struct genlock_handle *handle = file->private_data;
genlock_release_lock(handle);
kfree(handle);
return 0;
}
static const struct file_operations genlock_handle_fops = {
.release = genlock_handle_release
};
/*
* Allocate a new genlock handle
*/
static struct genlock_handle *_genlock_get_handle(void)
{
struct genlock_handle *handle = kzalloc(sizeof(*handle), GFP_KERNEL);
if (handle == NULL)
return ERR_PTR(-ENOMEM);
return handle;
}
/**
* genlock_get_handle - Create a new genlock handle
*
* Returns: A pointer to a new genlock handle
*/
struct genlock_handle *genlock_get_handle(void)
{
struct genlock_handle *handle = _genlock_get_handle();
if (IS_ERR(handle))
return handle;
handle->file = anon_inode_getfile("genlock-handle",
&genlock_handle_fops, handle, O_RDWR);
if (IS_ERR(handle->file)) {
kfree(handle);
return ERR_PTR(-ENOMEM);
}
return handle;
}
EXPORT_SYMBOL(genlock_get_handle);
/**
* genlock_put_handle - release a reference to a genlock handle
* @handle - A pointer to the handle to release
*/
void genlock_put_handle(struct genlock_handle *handle)
{
if (handle)
fput(handle->file);
}
EXPORT_SYMBOL(genlock_put_handle);
/**
* genlock_get_handle_fd - Get a handle reference from a file descriptor
* @fd - The file descriptor for a genlock handle
*/
struct genlock_handle *genlock_get_handle_fd(int fd)
{
struct file *file = fget(fd);
if (file == NULL)
return ERR_PTR(-EINVAL);
/* Only trust private_data if this really is a genlock handle fd */
if (file->f_op != &genlock_handle_fops) {
fput(file);
return ERR_PTR(-EINVAL);
}
return file->private_data;
}
EXPORT_SYMBOL(genlock_get_handle_fd);
#ifdef CONFIG_GENLOCK_MISCDEVICE
static long genlock_dev_ioctl(struct file *filep, unsigned int cmd,
unsigned long arg)
{
struct genlock_lock param;
struct genlock_handle *handle = filep->private_data;
struct genlock *lock;
int ret;
switch (cmd) {
case GENLOCK_IOC_NEW: {
lock = genlock_create_lock(handle);
if (IS_ERR(lock))
return PTR_ERR(lock);
return 0;
}
case GENLOCK_IOC_EXPORT: {
if (handle->lock == NULL)
return -EINVAL;
ret = genlock_get_fd(handle->lock);
if (ret < 0)
return ret;
param.fd = ret;
if (copy_to_user((void __user *) arg, &param,
sizeof(param)))
return -EFAULT;
return 0;
}
case GENLOCK_IOC_ATTACH: {
if (copy_from_user(&param, (void __user *) arg,
sizeof(param)))
return -EFAULT;
lock = genlock_attach_lock(handle, param.fd);
if (IS_ERR(lock))
return PTR_ERR(lock);
return 0;
}
case GENLOCK_IOC_LOCK: {
if (copy_from_user(&param, (void __user *) arg,
sizeof(param)))
return -EFAULT;
return genlock_lock(handle, param.op, param.flags,
param.timeout);
}
case GENLOCK_IOC_WAIT: {
if (copy_from_user(&param, (void __user *) arg,
sizeof(param)))
return -EFAULT;
return genlock_wait(handle, param.timeout);
}
case GENLOCK_IOC_RELEASE: {
genlock_release_lock(handle);
return 0;
}
default:
return -EINVAL;
}
}
static int genlock_dev_release(struct inode *inodep, struct file *file)
{
struct genlock_handle *handle = file->private_data;
genlock_release_lock(handle);
kfree(handle);
return 0;
}
static int genlock_dev_open(struct inode *inodep, struct file *file)
{
struct genlock_handle *handle = _genlock_get_handle();
if (IS_ERR(handle))
return PTR_ERR(handle);
handle->file = file;
file->private_data = handle;
return 0;
}
static const struct file_operations genlock_dev_fops = {
.open = genlock_dev_open,
.release = genlock_dev_release,
.unlocked_ioctl = genlock_dev_ioctl,
};
static struct miscdevice genlock_dev;
static int genlock_dev_init(void)
{
genlock_dev.minor = MISC_DYNAMIC_MINOR;
genlock_dev.name = "genlock";
genlock_dev.fops = &genlock_dev_fops;
genlock_dev.parent = NULL;
return misc_register(&genlock_dev);
}
static void genlock_dev_close(void)
{
misc_deregister(&genlock_dev);
}
module_init(genlock_dev_init);
module_exit(genlock_dev_close);
#endif

include/linux/genalloc.h

@@ -1,3 +1,67 @@
#ifdef CONFIG_MSM_KGSL
/*
* Basic general purpose allocator for managing special purpose memory
* not managed by the regular kmalloc/kfree interface.
* Uses for this include on-device special memory, uncached memory
* etc.
*
* This source code is licensed under the GNU General Public License,
* Version 2. See the file COPYING for more details.
*/
#ifndef __GENALLOC_H__
#define __GENALLOC_H__
struct gen_pool;
struct gen_pool *__must_check gen_pool_create(unsigned order, int nid);
void gen_pool_destroy(struct gen_pool *pool);
unsigned long __must_check
gen_pool_alloc_aligned(struct gen_pool *pool, size_t size,
unsigned alignment_order);
/**
* gen_pool_alloc() - allocate special memory from the pool
* @pool: Pool to allocate from.
* @size: Number of bytes to allocate from the pool.
*
* Allocate the requested number of bytes from the specified pool.
* Uses a first-fit algorithm.
*/
static inline unsigned long __must_check
gen_pool_alloc(struct gen_pool *pool, size_t size)
{
return gen_pool_alloc_aligned(pool, size, 0);
}
void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size);
extern phys_addr_t gen_pool_virt_to_phys(struct gen_pool *pool, unsigned long);
extern int gen_pool_add_virt(struct gen_pool *, unsigned long, phys_addr_t,
size_t, int);
/**
* gen_pool_add - add a new chunk of special memory to the pool
* @pool: pool to add new memory chunk to
* @addr: starting address of memory chunk to add to pool
* @size: size in bytes of the memory chunk to add to pool
* @nid: node id of the node the chunk structure and bitmap should be
* allocated on, or -1
*
* Add a new chunk of special memory to the specified pool.
*
* Returns 0 on success or a -ve errno on failure.
*/
static inline int __must_check gen_pool_add(struct gen_pool *pool, unsigned long addr,
size_t size, int nid)
{
return gen_pool_add_virt(pool, addr, -1, size, nid);
}
#endif /* __GENALLOC_H__ */
#else
/*
* Basic general purpose allocator for managing special purpose memory
* not managed by the regular kmalloc/kfree interface.
@@ -34,3 +98,5 @@ extern int gen_pool_add(struct gen_pool *, unsigned long, size_t, int);
extern void gen_pool_destroy(struct gen_pool *);
extern unsigned long gen_pool_alloc(struct gen_pool *, size_t);
extern void gen_pool_free(struct gen_pool *, unsigned long, size_t);
#endif
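To make the CONFIG_MSM_KGSL variant of this interface concrete, here is a
minimal, hypothetical sketch of a driver placing a small on-device SRAM
region under pool management; the function name, region size, and granule
size are illustrative only:

#include <linux/errno.h>
#include <linux/genalloc.h>
#include <linux/types.h>

/* Hypothetical: manage a 64 KiB on-device SRAM region in 256-byte granules */
static int example_sram_init(unsigned long sram_virt, phys_addr_t sram_phys)
{
    struct gen_pool *pool;
    unsigned long buf;
    int ret;

    /* order 8: each bitmap bit represents a 256-byte granule */
    pool = gen_pool_create(8, -1);
    if (!pool)
        return -ENOMEM;

    /* sram_virt must be aligned to the pool order */
    ret = gen_pool_add_virt(pool, sram_virt, sram_phys, 64 * 1024, -1);
    if (ret) {
        gen_pool_destroy(pool);
        return ret;
    }

    /* First-fit allocation of 1 KiB, aligned to 4 KiB (order 12) */
    buf = gen_pool_alloc_aligned(pool, 1024, 12);
    if (!buf) {
        gen_pool_destroy(pool);
        return -ENOMEM;
    }

    /* gen_pool_virt_to_phys(pool, buf) yields the device-visible address */

    gen_pool_free(pool, buf, 1024);
    gen_pool_destroy(pool);
    return 0;
}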

include/linux/genlock.h Normal file

@@ -0,0 +1,45 @@
#ifndef _GENLOCK_H_
#define _GENLOCK_H_
#ifdef __KERNEL__
struct genlock;
struct genlock_handle;
struct genlock_handle *genlock_get_handle(void);
struct genlock_handle *genlock_get_handle_fd(int fd);
void genlock_put_handle(struct genlock_handle *handle);
struct genlock *genlock_create_lock(struct genlock_handle *);
struct genlock *genlock_attach_lock(struct genlock_handle *, int fd);
int genlock_wait(struct genlock_handle *handle, u32 timeout);
void genlock_release_lock(struct genlock_handle *);
int genlock_lock(struct genlock_handle *handle, int op, int flags,
u32 timeout);
#endif
#define GENLOCK_UNLOCK 0
#define GENLOCK_WRLOCK 1
#define GENLOCK_RDLOCK 2
#define GENLOCK_NOBLOCK (1 << 0)
struct genlock_lock {
int fd;
int op;
int flags;
int timeout;
};
#define GENLOCK_IOC_MAGIC 'G'
#define GENLOCK_IOC_NEW _IO(GENLOCK_IOC_MAGIC, 0)
#define GENLOCK_IOC_EXPORT _IOR(GENLOCK_IOC_MAGIC, 1, \
struct genlock_lock)
#define GENLOCK_IOC_ATTACH _IOW(GENLOCK_IOC_MAGIC, 2, \
struct genlock_lock)
#define GENLOCK_IOC_LOCK _IOW(GENLOCK_IOC_MAGIC, 3, \
struct genlock_lock)
#define GENLOCK_IOC_RELEASE _IO(GENLOCK_IOC_MAGIC, 4)
#define GENLOCK_IOC_WAIT _IOW(GENLOCK_IOC_MAGIC, 5, \
struct genlock_lock)
#endif

lib/genalloc.c

@@ -1,3 +1,253 @@
#ifdef CONFIG_MSM_KGSL
/*
* Basic general purpose allocator for managing special purpose memory
* not managed by the regular kmalloc/kfree interface.
* Uses for this include on-device special memory, uncached memory
* etc.
*
* Copyright 2005 (C) Jes Sorensen <jes@trained-monkey.org>
*
* This source code is licensed under the GNU General Public License,
* Version 2. See the file COPYING for more details.
*/
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/bitmap.h>
#include <linux/genalloc.h>
/* General purpose special memory pool descriptor. */
struct gen_pool {
rwlock_t lock; /* protects chunks list */
struct list_head chunks; /* list of chunks in this pool */
unsigned order; /* minimum allocation order */
};
/* General purpose special memory pool chunk descriptor. */
struct gen_pool_chunk {
spinlock_t lock; /* protects bits */
struct list_head next_chunk; /* next chunk in pool */
phys_addr_t phys_addr; /* physical starting address of memory chunk */
unsigned long start; /* start of memory chunk */
unsigned long size; /* number of bits */
unsigned long bits[0]; /* bitmap for allocating memory chunk */
};
/**
* gen_pool_create() - create a new special memory pool
* @order: Log base 2 of number of bytes each bitmap bit
* represents.
* @nid: Node id of the node the pool structure should be allocated
* on, or -1. This will be also used for other allocations.
*
* Create a new special memory pool that can be used to manage special purpose
* memory not managed by the regular kmalloc/kfree interface.
*/
struct gen_pool *__must_check gen_pool_create(unsigned order, int nid)
{
struct gen_pool *pool;
if (WARN_ON(order >= BITS_PER_LONG))
return NULL;
pool = kmalloc_node(sizeof *pool, GFP_KERNEL, nid);
if (pool) {
rwlock_init(&pool->lock);
INIT_LIST_HEAD(&pool->chunks);
pool->order = order;
}
return pool;
}
EXPORT_SYMBOL(gen_pool_create);
/**
* gen_pool_add_virt - add a new chunk of special memory to the pool
* @pool: pool to add new memory chunk to
* @virt: virtual starting address of memory chunk to add to pool
* @phys: physical starting address of memory chunk to add to pool
* @size: size in bytes of the memory chunk to add to pool
* @nid: node id of the node the chunk structure and bitmap should be
* allocated on, or -1
*
* Add a new chunk of special memory to the specified pool.
*
* Returns 0 on success or a -ve errno on failure.
*/
int __must_check gen_pool_add_virt(struct gen_pool *pool, unsigned long virt, phys_addr_t phys,
size_t size, int nid)
{
struct gen_pool_chunk *chunk;
size_t nbytes;
if (WARN_ON(!virt || virt + size < virt ||
(virt & ((1 << pool->order) - 1))))
return -EINVAL;
size = size >> pool->order;
if (WARN_ON(!size))
return -EINVAL;
nbytes = sizeof *chunk + BITS_TO_LONGS(size) * sizeof *chunk->bits;
chunk = kzalloc_node(nbytes, GFP_KERNEL, nid);
if (!chunk)
return -ENOMEM;
spin_lock_init(&chunk->lock);
chunk->phys_addr = phys;
chunk->start = virt >> pool->order;
chunk->size = size;
write_lock(&pool->lock);
list_add(&chunk->next_chunk, &pool->chunks);
write_unlock(&pool->lock);
return 0;
}
EXPORT_SYMBOL(gen_pool_add_virt);
/**
* gen_pool_virt_to_phys - return the physical address of memory
* @pool: pool to allocate from
* @addr: starting address of memory
*
* Returns the physical address on success, or -1 on error.
*/
phys_addr_t gen_pool_virt_to_phys(struct gen_pool *pool, unsigned long addr)
{
struct list_head *_chunk;
struct gen_pool_chunk *chunk;
unsigned long off = addr >> pool->order;
phys_addr_t paddr = -1;
read_lock(&pool->lock);
list_for_each(_chunk, &pool->chunks) {
chunk = list_entry(_chunk, struct gen_pool_chunk, next_chunk);
/* chunk->start and chunk->size are in units of 1 << pool->order */
if (off >= chunk->start && off < (chunk->start + chunk->size)) {
paddr = chunk->phys_addr +
(addr - (chunk->start << pool->order));
break;
}
}
/* drop the lock on every path, including a successful lookup */
read_unlock(&pool->lock);
return paddr;
}
EXPORT_SYMBOL(gen_pool_virt_to_phys);
/**
* gen_pool_destroy() - destroy a special memory pool
* @pool: Pool to destroy.
*
* Destroy the specified special memory pool. Verifies that there are no
* outstanding allocations.
*/
void gen_pool_destroy(struct gen_pool *pool)
{
struct gen_pool_chunk *chunk;
int bit;
while (!list_empty(&pool->chunks)) {
chunk = list_entry(pool->chunks.next, struct gen_pool_chunk,
next_chunk);
list_del(&chunk->next_chunk);
bit = find_next_bit(chunk->bits, chunk->size, 0);
BUG_ON(bit < chunk->size);
kfree(chunk);
}
kfree(pool);
}
EXPORT_SYMBOL(gen_pool_destroy);
/**
* gen_pool_alloc_aligned() - allocate special memory from the pool
* @pool: Pool to allocate from.
* @size: Number of bytes to allocate from the pool.
* @alignment_order: Order the allocated space should be
* aligned to (eg. 20 means allocated space
* must be aligned to 1MiB).
*
* Allocate the requested number of bytes from the specified pool.
* Uses a first-fit algorithm.
*/
unsigned long __must_check
gen_pool_alloc_aligned(struct gen_pool *pool, size_t size,
unsigned alignment_order)
{
unsigned long addr, align_mask = 0, flags, start;
struct gen_pool_chunk *chunk;
if (size == 0)
return 0;
if (alignment_order > pool->order)
align_mask = (1 << (alignment_order - pool->order)) - 1;
size = (size + (1UL << pool->order) - 1) >> pool->order;
read_lock(&pool->lock);
list_for_each_entry(chunk, &pool->chunks, next_chunk) {
if (chunk->size < size)
continue;
spin_lock_irqsave(&chunk->lock, flags);
start = bitmap_find_next_zero_area_off(chunk->bits, chunk->size,
0, size, align_mask,
chunk->start);
if (start >= chunk->size) {
spin_unlock_irqrestore(&chunk->lock, flags);
continue;
}
bitmap_set(chunk->bits, start, size);
spin_unlock_irqrestore(&chunk->lock, flags);
addr = (chunk->start + start) << pool->order;
goto done;
}
addr = 0;
done:
read_unlock(&pool->lock);
return addr;
}
EXPORT_SYMBOL(gen_pool_alloc_aligned);
/**
* gen_pool_free() - free allocated special memory back to the pool
* @pool: Pool to free to.
* @addr: Starting address of memory to free back to pool.
* @size: Size in bytes of memory to free.
*
* Free previously allocated special memory back to the specified pool.
*/
void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size)
{
struct gen_pool_chunk *chunk;
unsigned long flags;
if (!size)
return;
addr = addr >> pool->order;
size = (size + (1UL << pool->order) - 1) >> pool->order;
BUG_ON(addr + size < addr);
read_lock(&pool->lock);
list_for_each_entry(chunk, &pool->chunks, next_chunk)
if (addr >= chunk->start &&
addr + size <= chunk->start + chunk->size) {
spin_lock_irqsave(&chunk->lock, flags);
bitmap_clear(chunk->bits, addr - chunk->start, size);
spin_unlock_irqrestore(&chunk->lock, flags);
goto done;
}
BUG_ON(1);
done:
read_unlock(&pool->lock);
}
EXPORT_SYMBOL(gen_pool_free);
#else
/*
* Basic general purpose allocator for managing special purpose memory
* not managed by the regular kmalloc/kfree interface.
@@ -194,3 +444,4 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size)
read_unlock(&pool->lock);
}
EXPORT_SYMBOL(gen_pool_free);
#endif