Compare commits


8 Commits

Author SHA1 Message Date
milaq 86a9e541f1 config: enable zram support 2012-07-06 18:17:55 +02:00
tytung f340fde792 drivers: staging: zram: added ZRAM support: /dev/zramX (X = 0, 1, ...). 2012-07-06 18:15:59 +02:00
Marc Alexander 19f657ae23 Allow high current charging on china chargers 2012-07-06 18:15:42 +02:00
Michael 4a3afa30e4 config: we dont need an explicit version number as this kernel is part of the nightly build 2012-04-05 16:27:35 +03:00
milaq 625c8174cb version: change extraversion to gb for gingerbread branch 2012-03-08 14:51:32 +01:00
milaq 9e3f7b59d7 config: minor changes 2012-03-07 23:14:35 +01:00
zeusk acd1c1032f Merge pull request #1 from zeusk/patch-1 (revert ics patch for htc headset, should fix audio lockup in cm7) 2012-03-07 07:30:51 -08:00
zeusk 3f02397d56 revert ics patch for htc headset, should fix audio lockup in cm7 2012-03-07 21:00:24 +05:30
184 changed files with 11059 additions and 28330 deletions

View File

@@ -1,161 +0,0 @@
Introduction
'genlock' is an in-kernel API and optional userspace interface for a generic
cross-process locking mechanism. The API is designed for situations where
multiple user space processes and/or kernel drivers need to coordinate access
to a shared resource, such as a graphics buffer. The API was designed with
graphics buffers in mind, but is sufficiently generic to allow it to be
independently used with different types of resources. The chief advantage
of genlock over other cross-process locking mechanisms is that the resources
can be accessed by both userspace and kernel drivers which allows resources
to be locked or unlocked by asynchronous events in the kernel without the
intervention of user space.
As an example, consider a graphics buffer that is shared between a rendering
application and a compositing window manager. The application renders into a
buffer. That buffer is reused by the compositing window manager as a texture.
To avoid corruption, access to the buffer needs to be restricted so that one
is not drawing on the surface while the other is reading. Locks can be
explicitly added between the rendering stages in the processes, but explicit
locks require that the application wait for rendering and purposely release the
lock. An implicit release triggered by an asynchronous event from the GPU
kernel driver, however, will let execution continue without requiring the
intercession of user space.

SW Goals

The genlock API implements exclusive write locks and shared read locks meaning
that there can only be one writer at a time, but multiple readers. Processes
that are unable to acquire a lock can be optionally blocked until the resource
becomes available.
Locks are shared between processes. Each process will have its own private
instance for a lock known as a handle. Handles can be shared between user
space and kernel space to allow a kernel driver to unlock or lock a buffer
on behalf of a user process.

Kernel API

Access to the genlock API can either be via the in-kernel API or via an
optional character device (/dev/genlock). The character device is primarily
to be used for legacy resource sharing APIs that cannot be easily changed.
New resource sharing APIs from this point on should implement a scheme-specific
wrapper for locking.
To create or attach to an existing lock, a process or kernel driver must first
create a handle. Each handle is linked to a single lock at any time. An entity
may have multiple handles, each associated with a different lock. Once a handle
has been created, the owner may create a new lock or attach an existing lock
that has been exported from a different handle.
Once the handle has a lock attached, the owning process may attempt to lock the
buffer for read or write. Write locks are exclusive, meaning that only one
process may hold the lock at any given time. Read locks are shared, meaning that
multiple readers can hold the lock at the same time. Attempts to acquire a read
lock with a writer active or a write lock with one or more readers or writers
active will typically cause the process to block until the lock is acquired.
When the lock is released, all waiting processes will be woken up. Ownership
of the lock is reference counted, meaning that any one owner can "lock"
multiple times. The lock is only released by that owner once all of the
references to the lock have been released via unlock.
The owner of a write lock may atomically convert the lock into a read lock
(which will wake up other processes waiting for a read lock) without first
releasing the lock. The owner would simply issue a new request for a read lock.
However, the owner of a read lock cannot convert it into a write lock in the
same manner. To switch from a read lock to a write lock, the owner must
release the lock and then try to reacquire it.
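As a sketch (assuming the GENLOCK_RDLOCK constant listed under the character
device interface below is also what the in-kernel genlock_lock() call accepts),
the downgrade is just a second lock request:

    /* Hypothetical: the write-lock holder re-requests the lock for read,
     * atomically downgrading it and waking any waiting readers. */
    ret = genlock_lock(handle, GENLOCK_RDLOCK, 0, timeout_ms);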
These are the in-kernel API calls that drivers can use to create and
manipulate handles and locks. Handles can either be created and managed
completely inside of kernel space, or shared from user space via a file
descriptor.
* struct genlock_handle *genlock_get_handle(void)
Create a new handle.
* struct genlock_handle * genlock_get_handle_fd(int fd)
Given a valid file descriptor, return the handle associated with that
descriptor.
* void genlock_put_handle(struct genlock_handle *)
Release a handle.
* struct genlock * genlock_create_lock(struct genlock_handle *)
Create a new lock and attach it to the handle.
* struct genlock * genlock_attach_lock(struct genlock_handle *handle, int fd)
Given a valid file descriptor, get the lock associated with it and attach it to
the handle.
* void genlock_release_lock(struct genlock_handle *)
Release a lock attached to a handle.
* int genlock_lock(struct genlock_handle *, int op, int flags, u32 timeout)
Lock or unlock the lock attached to the handle. A zero timeout value is
treated as if the GENLOCK_NOBLOCK flag were passed: if the lock can be
acquired without blocking, the call succeeds; otherwise it returns -EAGAIN.
Function returns -ETIMEDOUT if the timeout expired or 0 if the lock was
acquired.
* int genlock_wait(struct genlock_handle *, u32 timeout)
Wait for a lock held by the handle to go to the unlocked state. A non-zero
timeout value must be passed. Returns -ETIMEDOUT if the timeout expired or
0 if the lock is in an unlocked state.
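As an illustration, the following sketch shows how a kernel driver might
coordinate with a user process using only the calls above; the function name,
the resource_fd argument, and the error-handling convention are assumptions
made for the example, not part of the documented API.

/* Minimal hypothetical in-kernel user of genlock. */
static int example_write_shared_buffer(int resource_fd)
{
	struct genlock_handle *handle;
	struct genlock *lock;
	int ret;

	handle = genlock_get_handle();	/* private handle for this driver */
	if (IS_ERR(handle))
		return PTR_ERR(handle);

	/* Attach the lock that the owning process exported as an fd. */
	lock = genlock_attach_lock(handle, resource_fd);
	if (IS_ERR(lock)) {
		genlock_put_handle(handle);
		return PTR_ERR(lock);
	}

	/* Exclusive write lock; wait up to 100 ms before giving up. */
	ret = genlock_lock(handle, GENLOCK_WRLOCK, 0, 100);
	if (ret == 0) {
		/* ... modify the shared buffer ... */
		genlock_lock(handle, GENLOCK_UNLOCK, 0, 0);
	}

	genlock_release_lock(handle);
	genlock_put_handle(handle);
	return ret;
}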

Character Device

Opening an instance to the /dev/genlock character device will automatically
create a new handle. All ioctl functions with the exception of NEW and
RELEASE use the following parameter structure:
struct genlock_lock {
    int fd;       /* Returned by EXPORT, used by ATTACH */
    int op;       /* Used by LOCK */
    int flags;    /* Used by LOCK */
    u32 timeout;  /* Used by LOCK and WAIT */
};
* GENLOCK_IOC_NEW
Create a new lock and attach it to the handle. Returns -EINVAL if the handle
already has a lock attached (use GENLOCK_IOC_RELEASE to remove it) and
-ENOMEM if memory for the lock cannot be allocated. No data is passed
from the user for this ioctl.
* GENLOCK_IOC_EXPORT
Export the currently attached lock to a file descriptor. The file descriptor
is returned in genlock_lock.fd.
* GENLOCK_IOC_ATTACH
Attach an exported lock file descriptor to the current handle. Returns -EINVAL
if the handle already has a lock attached (use GENLOCK_IOC_RELEASE to remove
it). Pass the file descriptor in genlock_lock.fd.
* GENLOCK_IOC_LOCK
Lock or unlock the attached lock. Pass the desired operation in
genlock_lock.op:
* GENLOCK_WRLOCK - write lock
* GENLOCK_RDLOCK - read lock
* GENLOCK_UNLOCK - unlock an existing lock
Pass flags in genlock_lock.flags:
* GENLOCK_NOBLOCK - Do not block if the lock is already taken
Pass a timeout value in milliseconds in genlock_lock.timeout.
genlock_lock.flags and genlock_lock.timeout are not used for UNLOCK.
Returns -EINVAL if no lock is attached, -EAGAIN if the lock is taken and
NOBLOCK is specified or if the timeout value is zero, -ETIMEDOUT if the timeout
expires, or 0 if the lock was successfully acquired.
* GENLOCK_IOC_WAIT
Wait for the lock attached to the handle to be released (i.e. to go to the
unlocked state).
This is mainly used for a thread that needs to wait for a peer to release a
lock on the same shared handle. A non-zero timeout value in milliseconds is
passed in genlock_lock.timeout. Returns 0 when the lock has been released,
-EINVAL if a zero timeout is passed, or -ETIMEDOUT if the timeout expires.
* GENLOCK_IOC_RELEASE
Use this to release an existing lock. This is useful if you wish to attach a
different lock to the same handle. You do not need to call this under normal
circumstances; when the handle is closed the reference to the lock is released.
No data is passed from the user for this ioctl.
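To tie the ioctls together, a user-space owner might drive the device roughly
as follows; the header location and the exact call shapes are assumptions for
this sketch rather than documented guarantees.

/* Hypothetical user-space example: create, export, and use a lock. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/genlock.h>	/* assumed location of the ioctl definitions */

int main(void)
{
	struct genlock_lock req = { 0 };
	int dev = open("/dev/genlock", O_RDWR);	/* open creates the handle */

	if (dev < 0)
		return 1;
	if (ioctl(dev, GENLOCK_IOC_NEW))	/* attach a fresh lock, no data */
		return 1;
	if (ioctl(dev, GENLOCK_IOC_EXPORT, &req) == 0)
		printf("pass fd %d to the peer process\n", req.fd);

	req.op = GENLOCK_WRLOCK;	/* exclusive writer */
	req.flags = 0;			/* block instead of failing */
	req.timeout = 100;		/* milliseconds */
	if (ioctl(dev, GENLOCK_IOC_LOCK, &req) == 0) {
		/* ... write to the shared resource ... */
		req.op = GENLOCK_UNLOCK;
		ioctl(dev, GENLOCK_IOC_LOCK, &req);
	}

	close(dev);	/* dropping the handle releases its lock reference */
	return 0;
}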

View File

@@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 32
EXTRAVERSION = -ics
EXTRAVERSION = -gb
NAME = Man-Eating Seals of Antiquity
# *DOCUMENTATION*

README
View File

@@ -30,9 +30,8 @@ Primary features:
- Wired headphones support for ICS. (Credits: zivan56)
- Backported xt_qtaguid and xt_quota2 to support data usage for ICS. (Credits: tytung)
- Improved Flashlight compatibility for ICS. (Credits: tytung)
- Backported the GPU driver to enable the Hardware Acceleration for ICS. (Credits: Securecrt and Rick_1995)
Credits: Cotulla, Markinus, Hastarin, TYTung, Letama, Rajko, Dan1j3l, Cedesmith, Arne, Trilu, Charansingh, Mdebeljuh, Jdivic, Avs333, Snq-, Savan, Drizztje, Marc1706, Zivan56, Securecrt, Rick_1995, other devs, and testers.
Credits: Cotulla, Markinus, Hastarin, TYTung, Letama, Rajko, Dan1j3l, Cedesmith, Arne, Trilu, Charansingh, Mdebeljuh, Jdivic, Avs333, Snq-, Savan, Drizztje, Marc1706, Zivan56, other devs, and testers.
===============================================================================

View File

@@ -1,7 +1,7 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.32-ics
# Sat May 12 16:06:22 CST 2012
# Wed Mar 7 22:35:16 2012
#
CONFIG_ARM=y
CONFIG_SYS_SUPPORTS_APM_EMULATION=y
@@ -20,7 +20,6 @@ CONFIG_ARCH_HAS_CPUFREQ=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_GENERIC_HARDIRQS_NO__DO_IRQ=y
CONFIG_OPROFILE_ARMV7=y
CONFIG_VECTORS_BASE=0xffff0000
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_CONSTRUCTORS=y
@@ -401,10 +400,9 @@ CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
CONFIG_HAVE_MLOCK=y
CONFIG_HAVE_MLOCKED_PAGE_BIT=y
CONFIG_KSM=y
# CONFIG_KSM is not set
CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
CONFIG_ALIGNMENT_TRAP=y
CONFIG_ALLOW_CPU_ALIGNMENT=y
# CONFIG_UACCESS_WITH_MEMCPY is not set
#
@@ -851,8 +849,6 @@ CONFIG_FW_LOADER=y
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE=""
# CONFIG_SYS_HYPERVISOR is not set
CONFIG_GENLOCK=y
CONFIG_GENLOCK_MISCDEVICE=y
CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
CONFIG_MTD=y
@@ -1365,16 +1361,6 @@ CONFIG_DAB=y
#
# Graphics support
#
CONFIG_MSM_KGSL=y
# CONFIG_MSM_KGSL_CFF_DUMP is not set
# CONFIG_MSM_KGSL_PSTMRTMDMP_CP_STAT_NO_DETAIL is not set
# CONFIG_MSM_KGSL_PSTMRTMDMP_NO_IB_DUMP is not set
# CONFIG_MSM_KGSL_PSTMRTMDMP_RB_HEX is not set
CONFIG_MSM_KGSL_MMU=y
# CONFIG_KGSL_PER_PROCESS_PAGE_TABLE is not set
CONFIG_MSM_KGSL_PAGE_TABLE_SIZE=0xFFF0000
CONFIG_MSM_KGSL_MMU_PAGE_FAULT=y
# CONFIG_MSM_KGSL_DISABLE_SHADOW_WRITES is not set
# CONFIG_VGASTATE is not set
CONFIG_VIDEO_OUTPUT_CONTROL=y
CONFIG_FB=y
@@ -1408,6 +1394,9 @@ CONFIG_FB_CFB_IMAGEBLIT=y
CONFIG_FB_MSM=y
CONFIG_FB_MSM_LCDC=y
# CONFIG_FB_MSM_TVOUT is not set
CONFIG_GPU_MSM_KGSL=y
CONFIG_MSM_KGSL_MMU=y
# CONFIG_MSM_KGSL_PER_FD_PAGETABLE is not set
# CONFIG_MSM_HDMI is not set
CONFIG_FB_MSM_LOGO=y
# CONFIG_BACKLIGHT_LCD_SUPPORT is not set

View File

@@ -1 +0,0 @@
#include <generated/asm-offsets.h>

View File

@@ -129,45 +129,6 @@ static inline void dma_free_noncoherent(struct device *dev, size_t size,
{
}
/*
* dma_coherent_pre_ops - barrier functions for coherent memory before DMA.
* A barrier is required to ensure memory operations are complete before the
* initiation of a DMA xfer.
* If the coherent memory is Strongly Ordered
* - pre ARMv7 and 8x50 guarantees ordering wrt other mem accesses
* - ARMv7 guarantees ordering only within a 1KB block, so we need a barrier
* If coherent memory is normal then we need a barrier to prevent
* reordering
*/
static inline void dma_coherent_pre_ops(void)
{
#if (__LINUX_ARM_ARCH__ >= 7)
dmb();
#else
if (arch_is_coherent())
dmb();
else
barrier();
#endif
}
/*
* dma_post_coherent_ops - barrier functions for coherent memory after DMA.
* If the coherent memory is Strongly Ordered we dont need a barrier since
* there are no speculative fetches to Strongly Ordered memory.
* If coherent memory is normal then we need a barrier to prevent reordering
*/
static inline void dma_coherent_post_ops(void)
{
#if (__LINUX_ARM_ARCH__ >= 7)
dmb();
#else
if (arch_is_coherent())
dmb();
else
barrier();
#endif
}
/**
* dma_alloc_coherent - allocate consistent memory for DMA
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices

View File

@@ -185,12 +185,6 @@ extern void _memset_io(volatile void __iomem *, int, size_t);
#define readsw(p,d,l) __raw_readsw(__mem_pci(p),d,l)
#define readsl(p,d,l) __raw_readsl(__mem_pci(p),d,l)
#define writeb_relaxed(v,c) ((void)__raw_writeb(v,__mem_pci(c)))
#define writew_relaxed(v,c) ((void)__raw_writew((__force u16) \
cpu_to_le16(v),__mem_pci(c)))
#define writel_relaxed(v,c) ((void)__raw_writel((__force u32) \
cpu_to_le32(v),__mem_pci(c)))
#define writeb(v,c) __raw_writeb(v,__mem_pci(c))
#define writew(v,c) __raw_writew((__force __u16) \
cpu_to_le16(v),__mem_pci(c))

View File

@@ -1,75 +0,0 @@
/*
* arch/arm/include/asm/outercache.h
*
* Copyright (C) 2010 ARM Ltd.
* Written by Catalin Marinas <catalin.marinas@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#ifndef __ASM_OUTERCACHE_H
#define __ASM_OUTERCACHE_H
struct outer_cache_fns {
void (*inv_range)(unsigned long, unsigned long);
void (*clean_range)(unsigned long, unsigned long);
void (*flush_range)(unsigned long, unsigned long);
#ifdef CONFIG_OUTER_CACHE_SYNC
void (*sync)(void);
#endif
};
#ifdef CONFIG_OUTER_CACHE
extern struct outer_cache_fns outer_cache;
static inline void outer_inv_range(unsigned long start, unsigned long end)
{
if (outer_cache.inv_range)
outer_cache.inv_range(start, end);
}
static inline void outer_clean_range(unsigned long start, unsigned long end)
{
if (outer_cache.clean_range)
outer_cache.clean_range(start, end);
}
static inline void outer_flush_range(unsigned long start, unsigned long end)
{
if (outer_cache.flush_range)
outer_cache.flush_range(start, end);
}
#else
static inline void outer_inv_range(unsigned long start, unsigned long end)
{ }
static inline void outer_clean_range(unsigned long start, unsigned long end)
{ }
static inline void outer_flush_range(unsigned long start, unsigned long end)
{ }
#endif
#ifdef CONFIG_OUTER_CACHE_SYNC
static inline void outer_sync(void)
{
if (outer_cache.sync)
outer_cache.sync();
}
#else
static inline void outer_sync(void)
{ }
#endif
#endif /* __ASM_OUTERCACHE_H */

View File

@@ -382,13 +382,11 @@ ENDPROC(sys_clone_wrapper)
sys_sigreturn_wrapper:
add r0, sp, #S_OFF
mov why, #0 @ prevent syscall restart handling
b sys_sigreturn
ENDPROC(sys_sigreturn_wrapper)
sys_rt_sigreturn_wrapper:
add r0, sp, #S_OFF
mov why, #0 @ prevent syscall restart handling
b sys_rt_sigreturn
ENDPROC(sys_rt_sigreturn_wrapper)

View File

@@ -158,10 +158,10 @@ __secondary_data:
* registers.
*/
__enable_mmu:
#ifdef CONFIG_ALLOW_CPU_ALIGNMENT
bic r0, r0, #CR_A
#ifdef CONFIG_ALIGNMENT_TRAP
orr r0, r0, #CR_A
#else
orr r0, r0, #CR_A
bic r0, r0, #CR_A
#endif
#ifdef CONFIG_CPU_DCACHE_DISABLE
bic r0, r0, #CR_C

View File

@@ -22,7 +22,6 @@
#include <linux/errno.h>
#include <linux/cpufreq.h>
#include <linux/regulator/consumer.h>
#include <linux/regulator/driver.h>
#include <mach/board.h>
#include <mach/msm_iomap.h>
@@ -63,19 +62,6 @@ struct clkctl_acpu_speed {
unsigned axiclk_khz;
};
static unsigned long max_axi_rate;
struct regulator {
struct device *dev;
struct list_head list;
int uA_load;
int min_uV;
int max_uV;
char *supply_name;
struct device_attribute dev_attr;
struct regulator_dev *rdev;
};
/* clock sources */
#define CLK_TCXO 0 /* 19.2 MHz */
#define CLK_GLOBAL_PLL 1 /* 768 MHz */
@@ -90,46 +76,135 @@ struct regulator {
#define SRC_PLL1 3 /* 768 MHz */
struct clkctl_acpu_speed acpu_freq_tbl[] = {
{ 19200, CCTL(CLK_TCXO, 1), SRC_RAW, 0, 0, 1000, 14000},
{ 96000, CCTL(CLK_TCXO, 1), SRC_AXI, 0, 0, 1000, 14000 },
#ifdef CONFIG_HTCLEO_UNDERVOLT_1000
{ 19200, CCTL(CLK_TCXO, 1), SRC_RAW, 0, 0, 1000, 14000 },
{ 128000, CCTL(CLK_TCXO, 1), SRC_AXI, 0, 0, 1000, 14000 },
{ 245000, CCTL(CLK_MODEM_PLL, 1), SRC_RAW, 0, 0, 1000, 29000 },
/* Work arround for acpu resume hung, GPLL is turn off by arm9 */
/*{ 256000, CCTL(CLK_GLOBAL_PLL, 3), SRC_RAW, 0, 0, 1050, 29000 },*/
{ 384000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0A, 0, 1000, 58000 },
{ 422400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0B, 0, 1000, 117000 },
{ 460800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0C, 0, 1000, 117000 },
{ 499200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0D, 0, 1050, 117000 },
{ 537600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0E, 0, 1050, 117000 },
{ 576000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0F, 0, 1050, 117000 },
{ 614400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x10, 0, 1075, 117000 },
{ 652800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x11, 0, 1100, 117000 },
{ 691200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x12, 0, 1125, 117000 },
{ 729600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x13, 0, 1150, 117000 },
{ 768000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x14, 0, 1150, 128000 },
{ 806400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x15, 0, 1175, 128000 },
{ 844800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x16, 0, 1225, 128000 },
{ 883200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x17, 0, 1250, 128000 },
{ 921600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x18, 0, 1300, 128000 },
{ 960000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x19, 0, 1300, 128000 },
{ 998400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1A, 0, 1300, 128000 },
#ifdef CONFIG_HTCLEO_OVERCLOCK
{ 1036800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1B, 0, 1300, 128000 },
{ 1075200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1C, 0, 1300, 128000 },
{ 1113600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1D, 0, 1300, 128000 },
{ 1152000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1E, 0, 1300, 128000 },
{ 1190400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1F, 0, 1325, 128000 },
//{ 256000, CCTL(CLK_GLOBAL_PLL, 3), SRC_RAW, 0, 0, 1000, 29000 },
{ 384000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0A, 0, 1000, 58000 },
{ 422400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0B, 0, 1000, 117000 },
{ 460800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0C, 0, 1000, 117000 },
{ 499200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0D, 0, 1025, 117000 },
{ 537600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0E, 0, 1050, 117000 },
{ 576000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0F, 0, 1050, 117000 },
{ 614400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x10, 0, 1075, 117000 },
{ 652800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x11, 0, 1100, 117000 },
{ 691200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x12, 0, 1125, 117000 },
{ 729600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x13, 0, 1150, 117000 },
{ 768000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x14, 0, 1150, 128000 },
{ 806400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x15, 0, 1175, 128000 },
{ 844800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x16, 0, 1200, 128000 },
{ 883200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x17, 0, 1200, 128000 },
{ 921600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x18, 0, 1225, 128000 },
{ 960000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x19, 0, 1225, 128000 },
{ 998400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1A, 0, 1225, 128000 },
#elif CONFIG_HTCLEO_UNDERVOLT_925
// should work with most of HD2s
{ 19200, CCTL(CLK_TCXO, 1), SRC_RAW, 0, 0, 925, 14000 },
{ 128000, CCTL(CLK_TCXO, 1), SRC_AXI, 0, 0, 925, 14000 },
{ 245000, CCTL(CLK_MODEM_PLL, 1), SRC_RAW, 0, 0, 925, 29000 },
//{ 256000, CCTL(CLK_GLOBAL_PLL, 3), SRC_RAW, 0, 0, 925, 29000 },
{ 384000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0A, 0, 950, 58000 },
{ 422400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0B, 0, 975, 117000 },
{ 460800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0C, 0, 1000, 117000 },
{ 499200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0D, 0, 1025, 117000 },
{ 537600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0E, 0, 1050, 117000 },
{ 576000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0F, 0, 1050, 117000 },
{ 614400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x10, 0, 1075, 117000 },
{ 652800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x11, 0, 1100, 117000 },
{ 691200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x12, 0, 1125, 117000 },
{ 729600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x13, 0, 1150, 117000 },
{ 768000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x14, 0, 1150, 128000 },
{ 806400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x15, 0, 1175, 128000 },
{ 844800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x16, 0, 1200, 128000 },
{ 883200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x17, 0, 1200, 128000 },
{ 921600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x18, 0, 1225, 128000 },
{ 960000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x19, 0, 1225, 128000 },
{ 998400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1A, 0, 1225, 128000 },
#elif CONFIG_HTCLEO_UNDERVOLT_800
// not working yet
{ 19200, CCTL(CLK_TCXO, 1), SRC_RAW, 0, 0, 850, 14000 },
{ 128000, CCTL(CLK_TCXO, 1), SRC_AXI, 0, 0, 850, 14000 },
{ 245000, CCTL(CLK_MODEM_PLL, 1), SRC_RAW, 0, 0, 850, 29000 },
//{ 256000, CCTL(CLK_GLOBAL_PLL, 3), SRC_RAW, 0, 0, 850, 29000 },
{ 384000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0A, 0, 850, 58000 },
{ 422400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0B, 0, 875, 117000 },
{ 460800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0C, 0, 900, 117000 },
{ 499200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0D, 0, 925, 117000 },
{ 537600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0E, 0, 950, 117000 },
{ 576000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0F, 0, 950, 117000 },
{ 614400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x10, 0, 975, 117000 },
{ 652800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x11, 0, 1000, 117000 },
{ 691200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x12, 0, 1025, 117000 },
{ 729600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x13, 0, 1050, 117000 },
{ 768000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x14, 0, 1125, 128000 },
{ 806400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x15, 0, 1125, 128000 },
{ 844800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x16, 0, 1150, 128000 },
{ 883200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x17, 0, 1150, 128000 },
{ 921600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x18, 0, 1175, 128000 },
{ 960000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x19, 0, 1175, 128000 },
{ 998400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1A, 0, 1200, 128000 },
#else
{ 19200, CCTL(CLK_TCXO, 1), SRC_RAW, 0, 0, 1050, 14000},
{ 128000, CCTL(CLK_TCXO, 1), SRC_AXI, 0, 0, 1050, 14000 },
{ 245000, CCTL(CLK_MODEM_PLL, 1), SRC_RAW, 0, 0, 1050, 29000 },
/* Work arround for acpu resume hung, GPLL is turn off by arm9 */
/*{ 256000, CCTL(CLK_GLOBAL_PLL, 3), SRC_RAW, 0, 0, 1050, 29000 },*/
{ 384000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0A, 0, 1050, 58000 },
{ 422400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0B, 0, 1050, 117000 },
{ 460800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0C, 0, 1050, 117000 },
{ 499200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0D, 0, 1075, 117000 },
{ 537600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0E, 0, 1100, 117000 },
{ 576000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x0F, 0, 1100, 117000 },
{ 614400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x10, 0, 1125, 117000 },
{ 652800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x11, 0, 1150, 117000 },
{ 691200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x12, 0, 1175, 117000 },
{ 729600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x13, 0, 1200, 117000 },
{ 768000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x14, 0, 1200, 128000 },
{ 806400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x15, 0, 1225, 128000 },
{ 844800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x16, 0, 1250, 128000 },
{ 883200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x17, 0, 1275, 128000 },
{ 921600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x18, 0, 1300, 128000 },
{ 960000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x19, 0, 1300, 128000 },
{ 998400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1A, 0, 1300, 128000 },
#endif
#ifdef CONFIG_HTCLEO_OVERCLOCK
#ifdef CONFIG_HTCLEO_UNDERVOLT_1000
{ 1036800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1B, 0, 1225, 128000 },
{ 1075200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1C, 0, 1250, 128000 },
{ 1113600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1D, 0, 1275, 128000 },
{ 1152000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1E, 0, 1300, 128000 },
{ 1190400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1F, 0, 1325, 128000 },
#elif CONFIG_HTCLEO_UNDERVOLT_925
{ 1036800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1B, 0, 1225, 128000 },
{ 1075200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1C, 0, 1250, 128000 },
{ 1113600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1D, 0, 1275, 128000 },
{ 1152000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1E, 0, 1300, 128000 },
{ 1190400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1F, 0, 1325, 128000 },
#elif CONFIG_HTCLEO_UNDERVOLT_800
{ 1036800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1B, 0, 1225, 128000 },
{ 1075200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1C, 0, 1250, 128000 },
{ 1113600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1D, 0, 1275, 128000 },
{ 1152000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1E, 0, 1300, 128000 },
{ 1190400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1F, 0, 1325, 128000 },
#else
{ 1036800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1B, 0, 1300, 128000 },
{ 1075200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1C, 0, 1300, 128000 },
{ 1113600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1D, 0, 1300, 128000 },
{ 1152000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1E, 0, 1325, 128000 },
{ 1190400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x1F, 0, 1325, 128000 },
#endif
#endif
#ifdef CONFIG_HTCLEO_EXOVERCLOCK
{ 1228800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x20, 0, 1325, 128000 },
{ 1267200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x21, 0, 1350, 128000 },
{ 1305600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x22, 0, 1350, 128000 },
{ 1344000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x23, 0, 1350, 128000 },
{ 1382400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x24, 0, 1350, 128000 },
{ 1420800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x25, 0, 1350, 128000 },
{ 1459200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x26, 0, 1350, 128000 },
{ 1497600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x27, 0, 1350, 128000 },
{ 1536000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x28, 0, 1350, 128000 },
{ 1228800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x20, 0, 1325, 128000 },
{ 1267200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x21, 0, 1350, 128000 },
{ 1305600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x22, 0, 1350, 128000 },
{ 1344000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x23, 0, 1350, 128000 },
{ 1382400, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x24, 0, 1350, 128000 },
{ 1420800, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x25, 0, 1350, 128000 },
{ 1459200, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x26, 0, 1350, 128000 },
{ 1497600, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x27, 0, 1350, 128000 },
{ 1536000, CCTL(CLK_TCXO, 1), SRC_SCPLL, 0x28, 0, 1350, 128000 },
#endif
{ 0 },
};
@@ -156,10 +231,11 @@ static void __init acpuclk_init_cpufreq_table(void)
freq_table[i].index = i;
freq_table[i].frequency = CPUFREQ_ENTRY_INVALID;
/* Skip speeds using the global pll */
if (acpu_freq_tbl[i].acpu_khz == 256000 ||
acpu_freq_tbl[i].acpu_khz == 19200)
continue;
/* Skip speeds we don't want */
if ( acpu_freq_tbl[i].acpu_khz == 19200 ||
//acpu_freq_tbl[i].acpu_khz == 128000 ||
acpu_freq_tbl[i].acpu_khz == 256000)
continue;
vdd = acpu_freq_tbl[i].vdd;
/* Allow mpll and the first scpll speeds */
@@ -193,7 +269,6 @@ struct clock_state {
unsigned long wait_for_irq_khz;
struct clk* clk_ebi1;
struct regulator *regulator;
int (*acpu_set_vdd) (int mvolts);
};
static struct clock_state drv_state = { 0 };
@@ -270,10 +345,11 @@ static void scpll_set_freq(uint32_t lval)
dmb();
/* wait for frequency switch to finish */
while (readl(SCPLL_STATUS_ADDR) & 0x1);
while (readl(SCPLL_STATUS_ADDR) & 0x1)
;
/* completion bit is not reliable for SHOT switch */
udelay(15);
udelay(25);
}
/* write the new L val and switch mode */
@@ -287,7 +363,8 @@ static void scpll_set_freq(uint32_t lval)
dmb();
/* wait for frequency switch to finish */
while (readl(SCPLL_STATUS_ADDR) & 0x1);
while (readl(SCPLL_STATUS_ADDR) & 0x1)
;
}
/* this is still a bit weird... */
@@ -548,20 +625,13 @@ static void __init acpuclk_init(void)
}
drv_state.current_speed = speed;
for (speed = acpu_freq_tbl; speed->acpu_khz; speed++) {
for (speed = acpu_freq_tbl; speed->acpu_khz; speed++)
speed->lpj = cpufreq_scale(loops_per_jiffy,
init_khz, speed->acpu_khz);
max_axi_rate = speed->axiclk_khz * 1000;
}
loops_per_jiffy = drv_state.current_speed->lpj;
}
unsigned long acpuclk_get_max_axi_rate(void)
{
return max_axi_rate;
}
unsigned long acpuclk_get_rate(void)
{
return drv_state.current_speed->acpu_khz;
@@ -604,7 +674,6 @@ void __init msm_acpu_clock_init(struct msm_acpu_clock_platform_data *clkdata)
drv_state.vdd_switch_time_us = clkdata->vdd_switch_time_us;
drv_state.power_collapse_khz = clkdata->power_collapse_khz;
drv_state.wait_for_irq_khz = clkdata->wait_for_irq_khz;
drv_state.acpu_set_vdd = acpuclk_set_vdd_level;
if (clkdata->mpll_khz)
acpu_mpll->acpu_khz = clkdata->mpll_khz;
@@ -639,7 +708,7 @@ ssize_t acpuclk_get_vdd_levels_str(char *buf)
void acpuclk_set_vdd(unsigned acpu_khz, int vdd)
{
int i;
vdd = (vdd / HTCLEO_TPS65023_UV_STEP_MV) * HTCLEO_TPS65023_UV_STEP_MV;
vdd = vdd / 25 * 25; //! regulator only accepts multiples of 25 (mV)
mutex_lock(&drv_state.lock);
for (i = 0; acpu_freq_tbl[i].acpu_khz; i++)
{
@@ -653,16 +722,5 @@ void acpuclk_set_vdd(unsigned acpu_khz, int vdd)
}
mutex_unlock(&drv_state.lock);
}
unsigned int acpuclk_get_vdd_min(void)
{
return HTCLEO_TPS65023_MIN_UV_MV;
}
unsigned int acpuclk_get_vdd_max(void)
{
return HTCLEO_TPS65023_MAX_UV_MV;
}
unsigned int acpuclk_get_vdd_step(void)
{
return HTCLEO_TPS65023_UV_STEP_MV;
}
#endif

View File

@@ -39,6 +39,7 @@
#define HTCLEO_DEFAULT_BACKLIGHT_BRIGHTNESS 255
static struct led_trigger *htcleo_lcd_backlight;
static int auto_bl_state=0;
static DEFINE_MUTEX(htcleo_backlight_lock);

View File

@@ -118,7 +118,7 @@ int lightsensor_read_value(uint32_t *val)
}
*val = data[1] | (data[0] << 8);
D("lsensor adc = %u\n", *val); /* val is unsigned */
D("lsensor adc = %d\n", *val);
return 0;
}

View File

@@ -30,7 +30,6 @@
#include <mach/vreg.h>
#include <mach/gpio.h>
#include <mach/board-htcleo-mmc.h>
#include "board-htcleo.h"
#include "devices.h"
@@ -392,7 +391,7 @@ static int __init htcleommc_dbg_init(void)
{
struct dentry *dent;
if (!machine_is_htcleo())
if (!machine_is_htcleo() && !machine_is_htcleo())
return 0;
dent = debugfs_create_dir("htcleo_mmc_dbg", 0);

View File

@@ -25,20 +25,17 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/blktrans.h>
#include <mach/msm_iomap.h>
#include <linux/crc32.h>
#include <linux/io.h>
#include "board-htcleo.h"
#include <mach/board-htcleo-mac.h>
#define NVS_MAX_SIZE 0x800U
#define NVS_MACADDR_SIZE 0x1AU
#define WLAN_SKB_BUF_NUM 16
/*
* wifi mac address will be parsed in msm_nand_probe
* see drivers/mtd/devices/htcleo_nand.c
*/
static struct proc_dir_entry *wifi_calibration;
char nvs_mac_addr[NVS_MACADDR_SIZE];
static unsigned char nvs_mac_addr[NVS_MACADDR_SIZE];
static unsigned char *hardcoded_nvs =
"sromrev=3\n"\
"vendid=0x14e4\n"\
@@ -85,7 +82,35 @@ unsigned char *get_wifi_nvs_ram( void )
}
EXPORT_SYMBOL(get_wifi_nvs_ram);
static int parse_tag_msm_wifi(void)
{
uint32_t id1, id2, id3, sid1, sid2, sid3;
uint32_t id_base = 0xef260;
id1 = readl(MSM_SHARED_RAM_BASE + id_base + 0x0);
id2 = readl(MSM_SHARED_RAM_BASE + id_base + 0x4);
id3 = readl(MSM_SHARED_RAM_BASE + id_base + 0x8);
sid1 = crc32(~0, &id1, 4);
sid2 = crc32(~0, &id2, 4);
sid3 = crc32(~0, &id3, 4);
sprintf(nvs_mac_addr, "macaddr=00:23:76:%2x:%2x:%2x\n", sid1 % 0xff, sid2 % 0xff, sid3 % 0xff);
pr_info("Device Wifi Mac Address: %s\n", nvs_mac_addr);
return 0;
}
static int parse_tag_msm_wifi_from_spl(void)
{
uint32_t id1, id2, id3, id4, id5, id6;
uint32_t id_base = 0xFC028; //real mac offset found in spl for haret.exe on WM
id1 = readl(MSM_SPLHOOD_BASE + id_base + 0x0);
id2 = readl(MSM_SPLHOOD_BASE + id_base + 0x1);
id3 = readl(MSM_SPLHOOD_BASE + id_base + 0x2);
id4 = readl(MSM_SPLHOOD_BASE + id_base + 0x3);
id5 = readl(MSM_SPLHOOD_BASE + id_base + 0x4);
id6 = readl(MSM_SPLHOOD_BASE + id_base + 0x5);
sprintf(nvs_mac_addr, "macaddr=%2x:%2x:%2x:%2x:%2x:%2x\n", id1 & 0xff, id2 & 0xff, id3 & 0xff, id4 & 0xff, id5 & 0xff, id6 & 0xff);
pr_info("Device Real Wifi Mac Address: %s\n", nvs_mac_addr);
return 0;
}
static unsigned wifi_get_nvs_size( void )
{
@@ -127,6 +152,11 @@ static int wifi_calibration_read_proc(char *page, char **start, off_t off,
static int __init wifi_nvs_init(void)
{
pr_info("%s\n", __func__);
if (htcleo_is_nand_boot()) {
parse_tag_msm_wifi();
} else {
parse_tag_msm_wifi_from_spl();
}
wifi_calibration = create_proc_entry("calibration", 0444, NULL);
if (wifi_calibration != NULL) {
wifi_calibration->size = wifi_get_nvs_size();

View File

@@ -15,6 +15,7 @@
*
*/
#include <linux/crc32.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/i2c.h>
@@ -54,17 +55,10 @@
#ifdef CONFIG_SERIAL_BCM_BT_LPM
#include <mach/bcm_bt_lpm.h>
#endif
#ifdef CONFIG_PERFLOCK
#include <mach/perflock.h>
#endif
#include <mach/htc_headset_mgr.h>
#include <mach/htc_headset_gpio.h>
#ifdef CONFIG_MSM_KGSL
#include <linux/msm_kgsl.h>
#endif
#include <mach/board-htcleo-mac.h>
#include <mach/board-htcleo-microp.h>
#include "board-htcleo.h"
@@ -397,9 +391,10 @@ static uint32_t flashlight_gpio_table[] =
PCOM_GPIO_CFG(HTCLEO_GPIO_FLASHLIGHT_FLASH, 0, GPIO_OUTPUT, GPIO_NO_PULL, GPIO_2MA),
};
static void config_htcleo_flashlight_gpios(void)
static int config_htcleo_flashlight_gpios(void)
{
config_gpio_table(flashlight_gpio_table, ARRAY_SIZE(flashlight_gpio_table));
return 0;
}
static struct flashlight_platform_data htcleo_flashlight_data =
@@ -531,19 +526,87 @@ static struct platform_device msm_camera_sensor_s5k3e2fx =
},
};
//-----PATCH for BT mac address
int is_valid_mac_address(char *mac)
{
int i =0;
while(i<17){
if( (i%3) == 2){
if ((mac[i] != ':') && (mac[i] != '-')) return 0;
if (mac[i] == '-') mac[i] = ':';
}else{
if ( !( ((mac[i] >= '0') && (mac[i] <= '9')) ||
((mac[i] >= 'a') && (mac[i] <= 'f')) ||
((mac[i] >= 'A') && (mac[i] <= 'F')))
) return 0;
}
i++;
}
if (mac[i] != '\0') return 0;
return 1;
}
//-----------------------------
///////////////////////////////////////////////////////////////////////
// bluetooth
///////////////////////////////////////////////////////////////////////
/* AOSP style interface */
/*
* bluetooth mac address will be parsed in msm_nand_probe
* see drivers/mtd/devices/htcleo_nand.c
*/
char bdaddr[BDADDR_STR_SIZE];
#define BDADDR_STR_SIZE 18
static char bdaddr[BDADDR_STR_SIZE];
module_param_string(bdaddr, bdaddr, sizeof(bdaddr), 0400);
MODULE_PARM_DESC(bdaddr, "bluetooth address");
static int parse_tag_bdaddr(void)
{
uint32_t id1, id2, id3, sid1, sid2, sid3;
uint32_t id_base = 0xef260;
id1 = readl(MSM_SHARED_RAM_BASE + id_base + 0x0);
id2 = readl(MSM_SHARED_RAM_BASE + id_base + 0x4);
id3 = readl(MSM_SHARED_RAM_BASE + id_base + 0x8);
sid1 = crc32(~0, &id1, 4);
sid2 = crc32(~0, &id2, 4);
sid3 = crc32(~0, &id3, 4);
sprintf(bdaddr, "00:23:76:%2X:%2X:%2X", sid3 % 0xff, sid2 % 0xff, sid1 % 0xff);
pr_info("Device Bluetooth Mac Address: %s\n", bdaddr);
return 0;
}
/* end AOSP style interface */
/* for (sense roms) */
#define MAC_ADDRESS_SIZE_C 17
static char bdaddress[MAC_ADDRESS_SIZE_C+1] = "";
static void bt_export_bd_address(void)
{
unsigned char cTemp[6];
if (!is_valid_mac_address(bdaddress)){
memcpy(cTemp, get_bt_bd_ram(), 6);
sprintf(bdaddress, "%02x:%02x:%02x:%02x:%02x:%02x", cTemp[0], cTemp[1], cTemp[2], cTemp[3], cTemp[4], cTemp[5]);
pr_info("BD_ADDRESS=%s\n", bdaddress);
}
}
module_param_string(bdaddress, bdaddress, sizeof(bdaddress), S_IWUSR | S_IRUGO);
MODULE_PARM_DESC(bdaddress, "BT MAC ADDRESS");
#define MAX_BT_SIZE 0x6U
static unsigned char bt_bd_ram[MAX_BT_SIZE] = {0x50,0xC3,0x00,0x00,0x00,0x00};
unsigned char *get_bt_bd_ram(void)
{
return (bt_bd_ram);
}
//-----added alias for bt mac address parameter--------
static int __init htcleo_bt_macaddress_setup(char *bootconfig)
{
printk("%s: cmdline bt mac config=%s | %s\n",__FUNCTION__, bootconfig, __FILE__);
strncpy(bdaddress, bootconfig, MAC_ADDRESS_SIZE_C);
return 1;
}
__setup("bt.mac=", htcleo_bt_macaddress_setup);
//-----------------------------------------------------
/* end (sense) */
#ifdef CONFIG_SERIAL_MSM_HS
static struct msm_serial_hs_platform_data msm_uart_dm1_pdata = {
@@ -681,6 +744,28 @@ static struct platform_device qsd_device_spi = {
///////////////////////////////////////////////////////////////////////
// KGSL (HW3D support)#include <linux/android_pmem.h>
///////////////////////////////////////////////////////////////////////
static struct resource msm_kgsl_resources[] =
{
{
.name = "kgsl_reg_memory",
.start = MSM_GPU_REG_PHYS,
.end = MSM_GPU_REG_PHYS + MSM_GPU_REG_SIZE - 1,
.flags = IORESOURCE_MEM,
},
{
.name = "kgsl_phys_memory",
.start = MSM_GPU_PHYS_BASE,
.end = MSM_GPU_PHYS_BASE + MSM_GPU_PHYS_SIZE - 1,
.flags = IORESOURCE_MEM,
},
{
.start = INT_GRAPHICS,
.end = INT_GRAPHICS,
.flags = IORESOURCE_IRQ,
},
};
static int htcleo_kgsl_power_rail_mode(int follow_clk)
{
int mode = follow_clk ? 0 : 1;
@@ -697,80 +782,27 @@ static int htcleo_kgsl_power(bool on)
return msm_proc_comm(cmd, &rail_id, 0);
}
/* start kgsl-3d0 */
static struct resource kgsl_3d0_resources[] = {
{
.name = KGSL_3D0_REG_MEMORY,
.start = 0xA0000000,
.end = 0xA001ffff,
.flags = IORESOURCE_MEM,
},
{
.name = KGSL_3D0_IRQ,
.start = INT_GRAPHICS,
.end = INT_GRAPHICS,
.flags = IORESOURCE_IRQ,
},
static struct platform_device msm_kgsl_device =
{
.name = "kgsl",
.id = -1,
.resource = msm_kgsl_resources,
.num_resources = ARRAY_SIZE(msm_kgsl_resources),
};
static struct kgsl_device_platform_data kgsl_3d0_pdata = {
.pwr_data = {
.pwrlevel = {
{
.gpu_freq = 0,
.bus_freq = 128000000,
},
},
.init_level = 0,
.num_levels = 1,
.set_grp_async = NULL,
.idle_timeout = HZ/5,
},
.clk = {
.name = {
.clk = "grp_clk",
},
},
.imem_clk_name = {
.clk = "imem_clk",
},
};
struct platform_device msm_kgsl_3d0 = {
.name = "kgsl-3d0",
.id = 0,
.num_resources = ARRAY_SIZE(kgsl_3d0_resources),
.resource = kgsl_3d0_resources,
.dev = {
.platform_data = &kgsl_3d0_pdata,
},
};
/* end kgsl-3d0 */
///////////////////////////////////////////////////////////////////////
// Memory
///////////////////////////////////////////////////////////////////////
static struct android_pmem_platform_data mdp_pmem_pdata = {
.name = "pmem",
.start = MSM_PMEM_MDP_BASE,
.size = MSM_PMEM_MDP_SIZE,
#ifdef CONFIG_MSM_KGSL
.allocator_type = PMEM_ALLOCATORTYPE_BITMAP,
#else
.no_allocator = 0,
#endif
.cached = 1,
};
static struct android_pmem_platform_data android_pmem_adsp_pdata = {
.name = "pmem_adsp",
.start = MSM_PMEM_ADSP_BASE,
.size = MSM_PMEM_ADSP_SIZE,
#ifdef CONFIG_MSM_KGSL
.allocator_type = PMEM_ALLOCATORTYPE_BITMAP,
#else
.no_allocator = 0,
#endif
.cached = 1,
};
@@ -779,11 +811,7 @@ static struct android_pmem_platform_data android_pmem_venc_pdata = {
.name = "pmem_venc",
.start = MSM_PMEM_VENC_BASE,
.size = MSM_PMEM_VENC_SIZE,
#ifdef CONFIG_MSM_KGSL
.allocator_type = PMEM_ALLOCATORTYPE_BITMAP,
#else
.no_allocator = 0,
#endif
.cached = 1,
};
@@ -797,7 +825,7 @@ static struct platform_device android_pmem_mdp_device = {
static struct platform_device android_pmem_adsp_device = {
.name = "android_pmem",
.id = 1, /* 4 */
.id = 4,
.dev = {
.platform_data = &android_pmem_adsp_pdata,
},
@@ -805,7 +833,7 @@ static struct platform_device android_pmem_adsp_device = {
static struct platform_device android_pmem_venc_device = {
.name = "android_pmem",
.id = 3, /* 5 */
.id = 5,
.dev = {
.platform_data = &android_pmem_venc_pdata,
},
@@ -917,11 +945,7 @@ static struct platform_device *devices[] __initdata =
&msm_device_i2c,
&ds2746_battery_pdev,
&htc_battery_pdev,
#ifdef CONFIG_MSM_KGSL
&msm_kgsl_3d0,
#else
&msm_kgsl_device,
#endif
&msm_camera_sensor_s5k3e2fx,
&htcleo_flashlight_device,
&qsd_device_spi,
@@ -983,7 +1007,6 @@ static struct msm_acpu_clock_platform_data htcleo_clock_data = {
// .wait_for_irq_khz = 19200, // TCXO
};
#ifdef CONFIG_PERFLOCK
static unsigned htcleo_perf_acpu_table[] = {
245000000,
576000000,
@@ -994,8 +1017,6 @@ static struct perflock_platform_data htcleo_perflock_data = {
.perf_acpu_table = htcleo_perf_acpu_table,
.table_size = ARRAY_SIZE(htcleo_perf_acpu_table),
};
#endif
///////////////////////////////////////////////////////////////////////
// Reset
///////////////////////////////////////////////////////////////////////
@@ -1042,9 +1063,7 @@ static void __init htcleo_init(void)
msm_acpu_clock_init(&htcleo_clock_data);
#ifdef CONFIG_PERFLOCK
perflock_init(&htcleo_perflock_data);
#endif
#if defined(CONFIG_MSM_SERIAL_DEBUGGER)
msm_serial_debug_init(MSM_UART1_PHYS, INT_UART1,
@@ -1061,6 +1080,10 @@ static void __init htcleo_init(void)
config_gpio_table(bt_gpio_table, ARRAY_SIZE(bt_gpio_table));
parse_tag_bdaddr();
bt_export_bd_address();
htcleo_audio_init();
msm_device_i2c_init();

View File

@@ -38,12 +38,6 @@
#define MSM_FB_BASE MSM_PMEM_SMI_BASE
#define MSM_FB_SIZE 0x00600000
#define MSM_PMEM_MDP_BASE 0x3B700000
#define MSM_PMEM_MDP_SIZE 0x02000000
#define MSM_PMEM_ADSP_BASE 0x3D700000
#define MSM_PMEM_ADSP_SIZE 0x02900000
#define MSM_GPU_PHYS_BASE (MSM_PMEM_SMI_BASE + MSM_FB_SIZE)
#define MSM_GPU_PHYS_SIZE 0x00800000
/* #define MSM_GPU_PHYS_SIZE 0x00300000 */
@@ -60,6 +54,8 @@
#define MSM_PMEM_SF_SIZE 0x02000000
#define MSM_PMEM_ADSP_SIZE 0x02196000
/* MSM_RAM_CONSOLE uses the last 0x00040000 of EBI memory, defined in msm_iomap.h
#define MSM_RAM_CONSOLE_SIZE 0x00040000
#define MSM_RAM_CONSOLE_BASE (MSM_EBI1_BANK0_BASE + MSM_EBI1_BANK0_SIZE - MSM_RAM_CONSOLE_SIZE) //0x2FFC0000
@@ -178,8 +174,7 @@
/* Voltage driver */
#define HTCLEO_TPS65023_MIN_UV_MV (800)
#define HTCLEO_TPS65023_MAX_UV_MV (1375)
#define HTCLEO_TPS65023_UV_STEP_MV (25)
#define HTCLEO_TPS65023_MAX_UV_MV (1350)
/* LEDS */
#define LED_RGB (1 << 0)
@@ -197,11 +192,11 @@ struct microp_led_platform_data {
int num_leds;
};
int htcleo_pm_set_vreg(int enable, unsigned id);
int __init htcleo_init_panel(void);
int htcleo_is_nand_boot(void);
unsigned htcleo_get_vbus_state(void);
void config_camera_on_gpios(void);
void config_camera_off_gpios(void);
#endif /* __ARCH_ARM_MACH_MSM_BOARD_HTCLEO_H */

View File

@@ -34,8 +34,6 @@
//#define ENABLE_CLOCK_INFO 1
extern struct clk msm_clocks[];
static DEFINE_MUTEX(clocks_mutex);
static DEFINE_SPINLOCK(clocks_lock);
static LIST_HEAD(clocks);
@@ -235,16 +233,8 @@ struct mdns_clock_params msm_clock_freq_parameters[] = {
MSM_CLOCK_REG(64000000,0x19, 0x60, 0x30, 0, 2, 4, 1, 245760000), /* BT, 4000000 (*16) */
};
int status_set_grp_clk = 0;
int i_set_grp_clk = 0;
int control_set_grp_clk;
static void set_grp_clk( int on )
{
int i = 0;
int status = 0;
int control;
if ( on != 0 )
{
//axi_reset
@@ -284,7 +274,8 @@ static void set_grp_clk( int on )
writel(readl(MSM_CLK_CTL_BASE) |0x8, MSM_CLK_CTL_BASE);
//grp MD
writel(readl(MSM_CLK_CTL_BASE+0x80) |0x1, MSM_CLK_CTL_BASE+0x80); //PRPH_WEB_NS_REG
int i = 0;
int status = 0;
while ( status == 0 && i < 100) {
i++;
status = readl(MSM_CLK_CTL_BASE+0x84) & 0x1;
@@ -306,7 +297,7 @@ static void set_grp_clk( int on )
writel(readl(MSM_CLK_CTL_BASE+0x290) |0x4, MSM_CLK_CTL_BASE+0x290); //MSM_RAIL_CLAMP_IO
writel( 0x11f, MSM_CLK_CTL_BASE+0x284); //VDD_GRP_GFS_CTL
control = readl(MSM_CLK_CTL_BASE+0x288); //VDD_VDC_GFS_CTL
int control = readl(MSM_CLK_CTL_BASE+0x288); //VDD_VDC_GFS_CTL
if ( control & 0x100 )
writel(readl(MSM_CLK_CTL_BASE) &(~(0x8)), MSM_CLK_CTL_BASE);
}
@@ -1300,18 +1291,5 @@ static int __init clock_late_init(void)
//pr_info("reset imem_config\n");
return 0;
}
late_initcall(clock_late_init);
struct clk_ops clk_ops_pcom = {
.enable = pc_clk_enable,
.disable = pc_clk_disable,
.auto_off = pc_clk_disable,
// .reset = pc_clk_reset,
.set_rate = pc_clk_set_rate,
.set_min_rate = pc_clk_set_min_rate,
.set_max_rate = pc_clk_set_max_rate,
.set_flags = pc_clk_set_flags,
.get_rate = pc_clk_get_rate,
.is_enabled = pc_clk_is_enabled,
// .round_rate = pc_clk_round_rate,
};
late_initcall(clock_late_init);

View File

@@ -21,7 +21,6 @@
#include <linux/list.h>
#include <linux/err.h>
#include <linux/clk.h>
#include <mach/clk.h>
#include <linux/spinlock.h>
#include <linux/fs.h>
#include <linux/debugfs.h>

View File

@@ -17,13 +17,6 @@
*
*/
#include <linux/workqueue.h>
#include <linux/completion.h>
#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/sched.h>
#include <linux/suspend.h>
#include <linux/cpufreq.h>
#include <linux/earlysuspend.h>
#include <linux/init.h>

View File

@@ -138,14 +138,13 @@ dmov_exec_cmdptr_complete_func(struct msm_dmov_cmd *_cmd,
complete(&cmd->complete);
}
int msm_dmov_exec_cmd(unsigned id, unsigned int crci_mask, unsigned int cmdptr)
int msm_dmov_exec_cmd(unsigned id, unsigned int cmdptr)
{
struct msm_dmov_exec_cmdptr_cmd cmd;
PRINT_FLOW("dmov_exec_cmdptr(%d, %x)\n", id, cmdptr);
cmd.dmov_cmd.cmdptr = cmdptr;
cmd.dmov_cmd.crci_mask = crci_mask;
cmd.dmov_cmd.complete_func = dmov_exec_cmdptr_complete_func;
cmd.dmov_cmd.execute_func = NULL;
cmd.id = id;

View File

@@ -368,12 +368,12 @@ static void remove_35mm_do_work(struct work_struct *work)
if (hi->usb_dev_type == USB_HEADSET) {
hi->usb_dev_status = STATUS_CONNECTED_ENABLED;
state &= ~BIT_HEADSET;
state &= ~(BIT_35MM_HEADSET | BIT_HEADSET);
state |= BIT_HEADSET_NO_MIC;
switch_set_state(&hi->sdev, state);
mutex_unlock(&hi->mutex_lock);
} else if (hi->usb_dev_type == H2W_TVOUT) {
state &= ~BIT_HEADSET;
state &= ~(BIT_HEADSET | BIT_35MM_HEADSET);
state |= BIT_HEADSET_NO_MIC;
switch_set_state(&hi->sdev, state);
#if 0
@@ -384,7 +384,7 @@ static void remove_35mm_do_work(struct work_struct *work)
HS_DELAY_ZERO_JIFFIES);
#endif
} else {
state &= ~(BIT_HEADSET | BIT_HEADSET_NO_MIC);
state &= ~(BIT_HEADSET | BIT_HEADSET_NO_MIC | BIT_35MM_HEADSET);
switch_set_state(&hi->sdev, state);
}
@@ -446,7 +446,7 @@ static void insert_35mm_do_work(struct work_struct *work)
state |= BIT_HEADSET;
printk(KERN_INFO "3.5mm_headset with microphone\n");
}
state |= BIT_35MM_HEADSET;
switch_set_state(&hi->sdev, state);
if (state & BIT_HEADSET_NO_MIC)
hi->ext_35mm_status = HTC_35MM_NO_MIC;

View File

@@ -1,29 +0,0 @@
/* board-htcleo-mmc.h
*
* Copyright (C) 2011 marc1706
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef HTCLEO_AUDIO_H
#define HTCLEO_AUDIO_H
void htcleo_headset_enable(int en);
void htcleo_speaker_enable(int en);
void htcleo_receiver_enable(int en);
void htcleo_bt_sco_enable(int en);
void htcleo_mic_enable(int en);
void htcleo_analog_init(void);
int htcleo_get_rx_vol(uint8_t hw, int level);
void __init htcleo_audio_init(void);
#endif // HTCLEO_AUDIO_H

View File

@@ -1,27 +0,0 @@
/* arch/arm/mach-msm/include/mach/board-htcleo-mac.h
*
* Copyright (C) 2012 Marc Alexander.
* Author: Marc Alexander<admin@m-a-styles.de>
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __ARCH_ARM_MACH_MSM_BOARD_HTCLEO_MAC_H
#define __ARCH_ARM_MACH_MSM_BOARD_HTCLEO_MAC_H
#define NVS_MACADDR_SIZE 0x1AU
extern char nvs_mac_addr[NVS_MACADDR_SIZE];
#define BDADDR_STR_SIZE 18
extern char bdaddr[BDADDR_STR_SIZE]; /* AOSP style */
#endif

View File

@@ -136,14 +136,4 @@ struct microp_i2c_client_data {
int microp_i2c_read(uint8_t addr, uint8_t *data, int length);
int microp_i2c_write(uint8_t addr, uint8_t *data, int length);
int capella_cm3602_power(int pwr_device, uint8_t enable);
int microp_read_gpo_status(uint16_t *status);
int microp_gpo_enable(uint16_t gpo_mask);
int microp_gpo_disable(uint16_t gpo_mask);
#ifdef CONFIG_HAS_EARLYSUSPEND
void microp_early_suspend(struct early_suspend *h);
void microp_early_resume(struct early_suspend *h);
#endif // CONFIG_HAS_EARLYSUSPEND
#endif

View File

@@ -1,31 +0,0 @@
/* board-htcleo-mmc.h
*
* Copyright (C) 2011 marc1706
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef HTCLEO_MMC_H
#define HTCLEO_MMC_H
static bool opt_disable_sdcard;
static void (*wifi_status_cb)(int card_present, void *dev_id);
static void *wifi_status_cb_devid;
static int htcleo_wifi_power_state;
static int htcleo_wifi_reset_state;
int htcleo_wifi_set_carddetect(int val);
int htcleo_wifi_power(int on);
int htcleo_wifi_reset(int on);
int __init htcleo_init_mmc(unsigned debug_uart);
#endif // HTCLEO_MMC_H

View File

@@ -178,18 +178,6 @@ enum {
BOOTMODE_OFFMODE_CHARGING = 0x5,
};
void msm_hsusb_set_vbus_state(int online);
enum usb_connect_type {
CONNECT_TYPE_CLEAR = -2,
CONNECT_TYPE_UNKNOWN = -1,
CONNECT_TYPE_NONE = 0,
CONNECT_TYPE_USB,
CONNECT_TYPE_AC,
CONNECT_TYPE_9V_AC,
CONNECT_TYPE_WIRELESS,
CONNECT_TYPE_INTERNAL,
};
#define MSM_MAX_DEC_CNT 14
/* 7k target ADSP information */
/* Bit 23:0, for codec identification like mp3, wav etc *

View File

@@ -32,7 +32,6 @@
#define NUM_AUTOFOCUS_MULTI_WINDOW_GRIDS 16
#define NUM_STAT_OUTPUT_BUFFERS 3
#define NUM_AF_STAT_OUTPUT_BUFFERS 3
#define max_control_command_size 150
enum msm_queue {
MSM_CAM_Q_CTRL, /* control command or control command status */

View File

@@ -51,7 +51,4 @@ int clk_set_max_rate(struct clk *clk, unsigned long rate);
int clk_reset(struct clk *clk, enum clk_reset_action action);
int clk_set_flags(struct clk *clk, unsigned long flags);
unsigned long acpuclk_get_max_axi_rate(void);
#endif

View File

@@ -1,47 +0,0 @@
/* Copyright (c) 2009, Code Aurora Forum. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
* met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials provided
* with the distribution.
* * Neither the name of Code Aurora Forum, Inc. nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED
* WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
* IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*/
#ifndef __ARCH_ARM_MACH_MSM_DEBUG_MM_H_
#define __ARCH_ARM_MACH_MSM_DEBUG_MM_H_
/* The below macro removes the directory path name and retains only the
* file name to avoid long path names in log messages that comes as
* part of __FILE__ to compiler.
*/
#define __MM_FILE__ strrchr(__FILE__, '/') ? (strrchr(__FILE__, '/')+1) : \
__FILE__
#define MM_DBG(fmt, args...) pr_debug("[%s] " fmt,\
__func__, ##args)
#define MM_INFO(fmt, args...) pr_info("[%s:%s] " fmt,\
__MM_FILE__, __func__, ##args)
#define MM_ERR(fmt, args...) pr_err("[%s:%s] " fmt,\
__MM_FILE__, __func__, ##args)
#endif /* __ARCH_ARM_MACH_MSM_DEBUG_MM_H_ */

View File

@@ -27,7 +27,6 @@ struct msm_dmov_errdata {
struct msm_dmov_cmd {
struct list_head list;
unsigned int cmdptr;
unsigned int crci_mask;
void (*complete_func)(struct msm_dmov_cmd *cmd,
unsigned int result,
struct msm_dmov_errdata *err);
@@ -39,7 +38,7 @@ void msm_dmov_enqueue_cmd(unsigned id, struct msm_dmov_cmd *cmd);
void msm_dmov_enqueue_cmd_ext(unsigned id, struct msm_dmov_cmd *cmd);
void msm_dmov_stop_cmd(unsigned id, struct msm_dmov_cmd *cmd, int graceful);
void msm_dmov_flush(unsigned int id);
int msm_dmov_exec_cmd(unsigned id, unsigned int crci_mask, unsigned int cmdptr);
int msm_dmov_exec_cmd(unsigned id, unsigned int cmdptr);

View File

@@ -1,63 +0,0 @@
/* Copyright (c) 2009, Code Aurora Forum. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
* met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials provided
* with the distribution.
* * Neither the name of Code Aurora Forum, Inc. nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED
* WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
* IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef _INTERNAL_POWER_RAIL_H
#define _INTERNAL_POWER_RAIL_H
/* Clock power rail IDs */
#define PWR_RAIL_GRP_CLK 8
#define PWR_RAIL_GRP_2D_CLK 58
#define PWR_RAIL_MDP_CLK 14
#define PWR_RAIL_MFC_CLK 68
#define PWR_RAIL_ROTATOR_CLK 90
#define PWR_RAIL_VDC_CLK 39
#define PWR_RAIL_VFE_CLK 41
#define PWR_RAIL_VPE_CLK 76
enum rail_ctl_mode {
PWR_RAIL_CTL_AUTO = 0,
PWR_RAIL_CTL_MANUAL,
};
static inline int __maybe_unused internal_pwr_rail_ctl(unsigned rail_id,
bool enable)
{
/* Not yet implemented. */
return 0;
}
static inline int __maybe_unused internal_pwr_rail_mode(unsigned rail_id,
enum rail_ctl_mode mode)
{
/* Not yet implemented. */
return 0;
}
int internal_pwr_rail_ctl_auto(unsigned rail_id, bool enable);
#endif /* _INTERNAL_POWER_RAIL_H */

View File

@@ -311,7 +311,6 @@
#define INT_MDDI_CLIENT INT_MDC
#define INT_NAND_WR_ER_DONE INT_EBI2_WR_ER_DONE
#define INT_NAND_OP_DONE INT_EBI2_OP_DONE
#define INT_GRAPHICS INT_GRP_3D
#define NR_SIRC_IRQS 0

View File

@@ -1,7 +1,6 @@
/* arch/arm/mach-msm/include/mach/memory.h
*
* Copyright (C) 2007 Google, Inc.
* Copyright (c) 2009-2010, Code Aurora Forum. All rights reserved.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
@@ -13,6 +12,7 @@
* GNU General Public License for more details.
*
*/
#ifndef __ASM_ARCH_MEMORY_H
#define __ASM_ARCH_MEMORY_H
@@ -37,41 +37,28 @@
#define PHYS_OFFSET UL(0x10000000)
#endif
#define MAX_PHYSMEM_BITS 32
#define SECTION_SIZE_BITS 25
#define HAS_ARCH_IO_REMAP_PFN_RANGE
#define CONSISTENT_DMA_SIZE (4*SZ_1M)
#ifndef __ASSEMBLY__
void *alloc_bootmem_aligned(unsigned long size, unsigned long alignment);
unsigned long allocate_contiguous_ebi_nomap(unsigned long, unsigned long);
void clean_and_invalidate_caches(unsigned long, unsigned long, unsigned long);
void clean_caches(unsigned long, unsigned long, unsigned long);
void invalidate_caches(unsigned long, unsigned long, unsigned long);
int platform_physical_remove_pages(unsigned long, unsigned long);
int platform_physical_add_pages(unsigned long, unsigned long);
int platform_physical_low_power_pages(unsigned long, unsigned long);
#ifdef CONFIG_ARCH_MSM_ARM11
void write_to_strongly_ordered_memory(void);
void map_zero_page_strongly_ordered(void);
#include <asm/mach-types.h>
#ifdef CONFIG_ARCH_MSM7X27
#if defined(CONFIG_ARCH_MSM7227)
#define arch_barrier_extra() do \
{ \
write_to_strongly_ordered_memory(); \
} while (0)
#else
#define arch_barrier_extra() do \
{ if (machine_is_msm7x27_surf() || machine_is_msm7x27_ffa()) \
write_to_strongly_ordered_memory(); \
} while (0)
#endif
#define arch_barrier_extra() do {} while (0)
#endif
#ifdef CONFIG_CACHE_L2X0
@ -80,17 +67,12 @@ extern void l2x0_cache_flush_all(void);
#define finish_arch_switch(prev) do { l2x0_cache_sync(); } while (0)
#endif
#endif
#endif
#ifdef CONFIG_ARCH_MSM_SCORPION
#define arch_has_speculative_dfetch() 1
#endif
#endif
/* these correspond to values known by the modem */
#define MEMORY_DEEP_POWERDOWN 0
#define MEMORY_SELF_REFRESH 1
#define MEMORY_ACTIVE 2
#define NPA_MEMORY_NODE_NAME "/mem/ebi1/cs1"

View File

@ -1,134 +0,0 @@
/* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, and the entire permission notice in its entirety,
* including the disclaimer of warranties.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The name of the author may not be used to endorse or promote
* products derived from this software without specific prior
* written permission.
*
* ALTERNATIVELY, this product may be distributed under the terms of
* the GNU General Public License, version 2, in which case the provisions
* of the GPL version 2 are required INSTEAD OF the BSD license.
*
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
* WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF
* WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
* OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH
* DAMAGE.
*/
#ifndef _ARCH_ARM_MACH_MSM_BUS_H
#define _ARCH_ARM_MACH_MSM_BUS_H
#include <linux/types.h>
#include <linux/input.h>
/*
* Macros for clients to convert their data to ib and ab
* Ws : Time window over which to transfer the data in SECONDS
* Bs : Size of the data block in bytes
* Per : Recurrence period
* Tb : Throughput bandwidth to prevent stalling
* R : Ratio of actual bandwidth used to Tb
* Ib : Instantaneous bandwidth
* Ab : Arbitrated bandwidth
*
* IB_RECURRBLOCK and AB_RECURRBLOCK:
* These are used if the requirement is to transfer a
* recurring block of data over a known time window.
*
* IB_THROUGHPUTBW and AB_THROUGHPUTBW:
* These are used for CPU style masters. Here the requirement
* is to have minimum throughput bandwidth available to avoid
* stalling.
*/
#define IB_RECURRBLOCK(Ws, Bs) ((Ws) == 0 ? 0 : ((Bs)/(Ws)))
#define AB_RECURRBLOCK(Ws, Bs, Per) ((Ws) == 0 ? 0 : ((Bs)/(Per)))
#define IB_THROUGHPUTBW(Tb) (Tb)
#define AB_THROUGHPUTBW(Tb, R) ((Tb) * (R))
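A minimal usage sketch (the helper and its values are illustrative assumptions, not from this tree): a client that moves a 60 MB block once per second, with the block recurring every two seconds, could derive its vector entries from the macros above. Note that Ws is in whole seconds, so the integer division only behaves for windows of one second or more.
static void example_fill_vector(struct msm_bus_vectors *vec)
{
	const unsigned int Ws = 1;		/* transfer window: 1 second */
	const unsigned int Bs = 60 << 20;	/* 60 MB block per window */
	const unsigned int Per = 2;		/* block recurs every 2 seconds */

	vec->ib = IB_RECURRBLOCK(Ws, Bs);	/* burst requirement: 60 MB/s */
	vec->ab = Bs / Per;			/* average requirement: 30 MB/s
						 * (what AB_RECURRBLOCK computes) */
}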
struct msm_bus_vectors {
int src; /* Master */
int dst; /* Slave */
unsigned int ab; /* Arbitrated bandwidth */
unsigned int ib; /* Instantaneous bandwidth */
};
struct msm_bus_paths {
int num_paths;
struct msm_bus_vectors *vectors;
};
struct msm_bus_scale_pdata {
struct msm_bus_paths *usecase;
int num_usecases;
const char *name;
/*
* If the active_only flag is set to 1, the BW request is applied
* only when at least one CPU is active (powered on). If the flag
* is set to 0, then the BW request is always applied irrespective
* of the CPU state.
*/
unsigned int active_only;
};
/* Scaling APIs */
/*
* This function returns a handle to the client. This should be used to
* call msm_bus_scale_client_update_request.
* The function returns 0 if bus driver is unable to register a client
*/
#ifdef CONFIG_MSM_BUS_SCALING
uint32_t msm_bus_scale_register_client(struct msm_bus_scale_pdata *pdata);
int msm_bus_scale_client_update_request(uint32_t cl, unsigned int index);
void msm_bus_scale_unregister_client(uint32_t cl);
/* AXI Port configuration APIs */
int msm_bus_axi_porthalt(int master_port);
int msm_bus_axi_portunhalt(int master_port);
#else
static inline uint32_t
msm_bus_scale_register_client(struct msm_bus_scale_pdata *pdata)
{
return 1;
}
static inline int
msm_bus_scale_client_update_request(uint32_t cl, unsigned int index)
{
return 0;
}
static inline void
msm_bus_scale_unregister_client(uint32_t cl)
{
}
static inline int msm_bus_axi_porthalt(int master_port)
{
return 0;
}
static inline int msm_bus_axi_portunhalt(int master_port)
{
return 0;
}
#endif
#endif /*_ARCH_ARM_MACH_MSM_BUS_H*/
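A hedged driver-side sketch of the register/vote/unregister flow declared above. The table name `my_bus_pdata` and the wrapper functions are assumptions; the pdata is presumed to define use case 0 as idle and 1 as active, and, per the comment above, registration returns 0 on failure.
static uint32_t my_bus_client;

static int my_driver_bus_init(struct msm_bus_scale_pdata *my_bus_pdata)
{
	my_bus_client = msm_bus_scale_register_client(my_bus_pdata);
	if (!my_bus_client)
		return -EINVAL;	/* 0 means the bus driver rejected us */
	/* start in the idle use case */
	return msm_bus_scale_client_update_request(my_bus_client, 0);
}

static void my_driver_bus_exit(void)
{
	msm_bus_scale_unregister_client(my_bus_client);
}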

View File

@ -27,12 +27,10 @@ struct mddi_info;
#define MSM_MDP_OUT_IF_FMT_RGB888 2
/* mdp override operations */
#define MSM_MDP_PANEL_IGNORE_PIXEL_DATA (1 << 0)
#define MSM_MDP_PANEL_FLIP_UD (1 << 1)
#define MSM_MDP_PANEL_FLIP_LR (1 << 2)
#define MSM_MDP4_MDDI_DMA_SWITCH (1 << 3)
#define MSM_MDP_DMA_PACK_ALIGN_LSB (1 << 4)
#define MSM_MDP_RGB_PANEL_SELF_REFRESH (1 << 5)
/* mddi type */
#define MSM_MDP_MDDI_TYPE_I 0
@ -192,7 +190,6 @@ struct msm_lcdc_panel_ops {
int (*uninit)(struct msm_lcdc_panel_ops *);
int (*blank)(struct msm_lcdc_panel_ops *);
int (*unblank)(struct msm_lcdc_panel_ops *);
int (*shutdown)(struct msm_lcdc_panel_ops *);
};
struct msm_lcdc_platform_data {
@ -214,8 +211,6 @@ struct msm_tvenc_platform_data {
struct mdp_blit_req;
struct fb_info;
struct mdp_overlay;
struct msmfb_overlay_data;
struct mdp_device {
struct device dev;
void (*dma)(struct mdp_device *mdp, uint32_t addr,
@ -232,17 +227,14 @@ struct mdp_device {
int (*overlay_unset)(struct mdp_device *mdp, struct fb_info *fb,
int ndx);
int (*overlay_play)(struct mdp_device *mdp, struct fb_info *fb,
struct msmfb_overlay_data *req, struct file **p_src_file);
struct msmfb_overlay_data *req, struct file *p_src_file);
#endif
void (*set_grp_disp)(struct mdp_device *mdp, uint32_t disp_id);
void (*configure_dma)(struct mdp_device *mdp);
int (*check_output_format)(struct mdp_device *mdp, int bpp);
int (*set_output_format)(struct mdp_device *mdp, int bpp);
void (*set_panel_size)(struct mdp_device *mdp, int width, int height);
unsigned color_format;
unsigned overrides;
uint32_t width; /* panel width */
uint32_t height; /* panel height */
};
struct class_interface;

View File

@ -47,18 +47,8 @@ struct msm_hsusb_platform_data {
/* 1 : uart, 0 : usb */
void (*usb_uart_switch)(int);
void (*config_usb_id_gpios)(bool enable);
void (*usb_hub_enable)(bool);
void (*serial_debug_gpios)(int);
int (*china_ac_detect)(void);
void (*disable_usb_charger)(void);
/* val, reg pairs terminated by -1 */
int *phy_init_seq;
void (*change_phy_voltage)(int);
int (*ldo_init) (int init);
int (*ldo_enable) (int enable);
int (*rpc_connect)(int);
/* 1 : mhl, 0 : usb */
void (*usb_mhl_switch)(bool);
/* val, reg pairs terminated by -1 */
int *phy_init_seq;
#ifdef CONFIG_USB_FUNCTION
/* USB device descriptor fields */
@ -84,15 +74,10 @@ struct msm_hsusb_platform_data {
int num_products;
struct msm_hsusb_product *products;
#endif
char *serial_number;
int usb_id_pin_gpio;
int dock_pin_gpio;
int id_pin_irq;
bool enable_car_kit_detect;
__u8 accessory_detect;
bool dock_detect;
int ac_9v_gpio;
char *serial_number;
int usb_id_pin_gpio;
bool enable_car_kit_detect;
__u8 accessory_detect;
};
int usb_get_connect_type(void);

View File

@ -37,30 +37,11 @@
do { } while (0)
#endif /* VERBOSE */
#ifndef __LINUX_USB_COMPOSITE_H
#define ERROR(fmt,args...) \
xprintk(KERN_ERR , fmt , ## args)
#define INFO(fmt,args...) \
xprintk(KERN_INFO , fmt , ## args)
#endif
#define USB_ERR(fmt, args...) \
printk(KERN_ERR "[USB:ERR] " fmt, ## args)
#define USB_WARNING(fmt, args...) \
printk(KERN_WARNING "[USB] " fmt, ## args)
#define USB_INFO(fmt, args...) \
printk(KERN_INFO "[USB] " fmt, ## args)
#define USB_DEBUG(fmt, args...) \
printk(KERN_DEBUG "[USB] " fmt, ## args)
#define USBH_ERR(fmt, args...) \
printk(KERN_ERR "[USBH:ERR] " fmt, ## args)
#define USBH_WARNING(fmt, args...) \
printk(KERN_WARNING "[USBH] " fmt, ## args)
#define USBH_INFO(fmt, args...) \
printk(KERN_INFO "[USBH] " fmt, ## args)
#define USBH_DEBUG(fmt, args...) \
printk(KERN_DEBUG "[USBH] " fmt, ## args)
/*-------------------------------------------------------------------------*/
@ -70,12 +51,9 @@
#define USB_HWDEVICE (MSM_USB_BASE + 0x000C)
#define USB_HWTXBUF (MSM_USB_BASE + 0x0010)
#define USB_HWRXBUF (MSM_USB_BASE + 0x0014)
#define USB_AHB_BURST (MSM_USB_BASE + 0x0090)
#define USB_AHB_MODE (MSM_USB_BASE + 0x0098)
#define USB_AHBBURST (USB_AHB_BURST)
#define USB_AHBMODE (USB_AHB_MODE)
#define USB_AHBBURST (MSM_USB_BASE + 0x0090)
#define USB_AHBMODE (MSM_USB_BASE + 0x0098)
#define USB_SBUSCFG (MSM_USB_BASE + 0x0090)
#define USB_ROC_AHB_MODE (MSM_USB_BASE + 0x0090)
#define USB_CAPLENGTH (MSM_USB_BASE + 0x0100) /* 8 bit */
#define USB_HCIVERSION (MSM_USB_BASE + 0x0102) /* 16 bit */
@ -104,26 +82,12 @@
#define USB_ENDPTCTRL(n) (MSM_USB_BASE + 0x01C0 + (4 * (n)))
#define USBCMD_RESET 2
#define USBCMD_ATTACH 1
#define USBCMD_RS (1 << 0) /* run/stop bit */
#define USBCMD_ATDTW (1 << 14)
#define ASYNC_INTR_CTRL (1 << 29)
#define ULPI_STP_CTRL (1 << 30)
#define USBCMD_ITC(n) (n << 16)
#define USBCMD_ITC_MASK (0xFF << 16)
#define USBCMD_RESET 2
#define USBCMD_ATTACH 1
#define USBCMD_ATDTW (1 << 14)
#define USBMODE_DEVICE 2
#define USBMODE_HOST 3
/* Redefine the SDIS bit, as it is defined incorrectly in ehci.h. */
#ifdef USBMODE_SDIS
#undef USBMODE_SDIS
#endif
#define USBMODE_SDIS (1 << 4) /* stream disable */
#define USBMODE_VBUS (1 << 5) /* vbus power select */
struct ept_queue_head {
unsigned config;
@ -174,7 +138,7 @@ struct ept_queue_item {
#define STS_NAKI (1 << 16) /* NAK interrupt */
#define STS_SLI (1 << 8) /* R/WC - suspend state entered */
#define STS_SRI (1 << 7) /* R/WC - SOF recv'd */
#define STS_URI (1 << 6) /* R/WC - RESET recv'd */
#define STS_URI (1 << 6) /* R/WC - RESET recv'd - write to clear */
#define STS_FRI (1 << 3) /* R/WC - Frame List Rollover */
#define STS_PCI (1 << 2) /* R/WC - Port Change Detect */
#define STS_UEI (1 << 1) /* R/WC - USB Error */
@ -211,38 +175,6 @@ struct ept_queue_item {
#define CTRL_RXT_INT (3 << 2)
#define CTRL_RXT_EP_TYPE_SHIFT 2
#if defined(CONFIG_ARCH_MSM7X30) || defined(CONFIG_ARCH_MSM8X60)
#define ULPI_DIGOUT_CTRL 0X36
#define ULPI_CDR_AUTORESET (1 << 1)
#else
#define ULPI_DIGOUT_CTRL 0X31
#define ULPI_CDR_AUTORESET (1 << 5)
#endif
#define ULPI_FUNC_CTRL_CLR (0x06)
#define ULPI_IFC_CTRL_CLR (0x09)
#define ULPI_AMPLITUDE_MAX (0x0C)
#define ULPI_OTG_CTRL (0x0B)
#define ULPI_OTG_CTRL_CLR (0x0C)
#define ULPI_INT_RISE_CLR (0x0F)
#define ULPI_INT_FALL_CLR (0x12)
#define ULPI_DEBUG_REG (0x15)
#define ULPI_SCRATCH_REG (0x16)
#define ULPI_CONFIG_REG1 (0x30)
#define ULPI_CONFIG_REG2 (0X31)
#define ULPI_CONFIG_REG (0x31)
#define ULPI_CONFIG_REG3 (0X32)
#define ULPI_CHG_DETECT_REG (0x34)
#define ULPI_PRE_EMPHASIS_MASK (3 << 4)
#define ULPI_DRV_AMPL_MASK (3 << 2)
#define ULPI_ONCLOCK (1 << 6)
#define ULPI_FUNC_SUSPENDM (1 << 6)
#define ULPI_IDPU (1 << 0)
#define ULPI_HOST_DISCONNECT (1 << 0)
#define ULPI_VBUS_VALID (1 << 1)
#define ULPI_SE1_GATE (1 << 2)
#define ULPI_SESS_END (1 << 3)
#define ULPI_ID_GND (1 << 4)
#define ULPI_WAKEUP (1 << 31)
#define ULPI_RUN (1 << 30)
#define ULPI_WRITE (1 << 29)
@ -252,17 +184,12 @@ struct ept_queue_item {
#define ULPI_DATA(n) ((n) & 255)
#define ULPI_DATA_READ(n) (((n) >> 8) & 255)
/* control charger detection by ULPI or externally */
#define ULPI_EXTCHGCTRL_65NM (1 << 2)
#define ULPI_EXTCHGCTRL_180NM (1 << 3)
#define ULPI_DEBUG_REG (0x15)
#define ULPI_SCRATCH_REG (0x16)
/* charger detection power on control */
#define ULPI_CHGDETON (1 << 1)
#define ULPI_FUNC_CTRL_CLR (0x06)
#define ULPI_FUNC_SUSPENDM (1 << 6)
/* enable charger detection */
#define ULPI_CHGDETEN (1 << 0)
#define ULPI_CHGTYPE_65NM (1 << 3)
#define ULPI_CHGTYPE_180NM (1 << 4)
/* USB_PORTSC bits for determining port speed */
#define PORTSC_PSPD_FS (0 << 26)
@ -291,30 +218,6 @@ struct ept_queue_item {
#define PORTSC_FPR (1 << 6) /* R/W - State normal => suspend */
#define PORTSC_SUSP (1 << 7) /* Read - Port in suspend state */
#define PORTSC_LS (3 << 10) /* Read - Port's Line status */
#define PORTSC_PHCD (1 << 23) /* phy suspend mode */
#define PORTSC_CCS (1 << 0) /* current connect status */
#define PORTSC_PTS (3 << 30)
#define PORTSC_PTS_ULPI (2 << 30)
#define PORTSC_PTS_SERIAL (3 << 30)
#define PORTSC_PORT_SPEED_FULL 0x00000000
#define PORTSC_PORT_SPEED_LOW 0x04000000
#define PORTSC_PORT_SPEED_HIGH 0x08000000
#define PORTSC_PORT_SPEED_MASK 0x0c000000
#define SBUSCFG_AHBBRST_INCR4 0x01
#define ULPI_USBINTR_ENABLE_FALLING_S 0x11
#define ULPI_USBINTR_ENABLE_FALLING_C 0x12
#define ULPI_USBINTR_STATUS 0x13
#define ULPI_USBINTR_ENABLE_RASING_S 0x0E
#define ULPI_USBINTR_ENABLE_RASING_C 0x0F
#define ULPI_SESSION_END_RAISE (1 << 3)
#define ULPI_SESSION_END_FALL (1 << 3)
#define ULPI_SESSION_VALID_RAISE (1 << 2)
#define ULPI_SESSION_VALID_FALL (1 << 2)
#define ULPI_VBUS_VALID_RAISE (1 << 1)
#define ULPI_VBUS_VALID_FALL (1 << 1)
#define PORTSC_PHCD (1 << 23) /* phy suspend mode */
#define PORTSC_CCS (1 << 0) /* current connect status */
#define PORTSC_PTS (3 << 30)
@ -335,9 +238,6 @@ struct ept_queue_item {
#define PORTSC_PTC_SE0_NAK (0x03 << 16)
#define PORTSC_PTC_TST_PKT (0x04 << 16)
#define USBH (1 << 15)
#define USB_PHY (1 << 18)
#define PORTSC_PTS_MASK (3 << 30)
#define PORTSC_PTS_ULPI (2 << 30)
#define PORTSC_PTS_SERIAL (3 << 30)
@ -350,9 +250,5 @@ struct ept_queue_item {
#define PORTSC_PHCD (1 << 23) /* phy suspend mode */
#define ULPI_DEBUG 0x15
#define ULPI_CLOCK_SUSPENDM (1 << 3)
#define ULPI_SUSPENDM (1 << 6)
#define ULPI_CALIB_STS (1 << 7)
#define ULPI_CALIB_VAL(x) (x & 0x7C)
#endif /* __LINUX_USB_GADGET_MSM72K_UDC_H__ */
#endif /* _USB_FUNCTION_MSM_HSUSB_HW_H */

View File

@ -1,64 +0,0 @@
/* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
/* The MSM Hardware supports multiple flavors of physical memory.
* This file captures hardware specific information of these types.
*/
#ifndef __ASM_ARCH_MSM_MEMTYPES_H
#define __ASM_ARCH_MSM_MEMTYPES_H
#include <mach/memory.h>
#include <linux/init.h>
int __init meminfo_init(unsigned int, unsigned int);
/* Redundant check to prevent this from being included outside of 7x30 */
#if defined(CONFIG_ARCH_MSM7X30)
unsigned int get_num_populated_chipselects(void);
#endif
unsigned int get_num_memory_banks(void);
unsigned int get_memory_bank_size(unsigned int);
unsigned int get_memory_bank_start(unsigned int);
int soc_change_memory_power(u64, u64, int);
enum {
MEMTYPE_NONE = -1,
MEMTYPE_SMI_KERNEL = 0,
MEMTYPE_SMI,
MEMTYPE_EBI0,
MEMTYPE_EBI1,
MEMTYPE_MAX,
};
void msm_reserve(void);
#define MEMTYPE_FLAGS_FIXED 0x1
#define MEMTYPE_FLAGS_1M_ALIGN 0x2
struct memtype_reserve {
unsigned long start;
unsigned long size;
unsigned long limit;
int flags;
};
struct reserve_info {
struct memtype_reserve *memtype_reserve_table;
void (*calculate_reserve_sizes)(void);
int (*paddr_to_memtype)(unsigned int);
unsigned long low_unstable_address;
unsigned long max_unstable_size;
unsigned long bank_size;
};
extern struct reserve_info *reserve_info;
#endif

View File

@ -132,8 +132,6 @@ uint32_t msm_rpc_get_vers(struct msm_rpc_endpoint *ept);
/* check if server version can handle client requested version */
int msm_rpc_is_compatible_version(uint32_t server_version,
uint32_t client_version);
struct msm_rpc_endpoint *msm_rpc_connect_compatible(uint32_t prog,
uint32_t vers, unsigned flags);
int msm_rpc_close(struct msm_rpc_endpoint *ept);
int msm_rpc_write(struct msm_rpc_endpoint *ept,
@ -166,7 +164,7 @@ struct msm_rpc_xdr {
void *in_buf;
uint32_t in_size;
uint32_t in_index;
wait_queue_head_t in_buf_wait_q;
struct mutex in_lock;
void *out_buf;
uint32_t out_size;
@ -176,22 +174,6 @@ struct msm_rpc_xdr {
struct msm_rpc_endpoint *ept;
};
int xdr_send_int8(struct msm_rpc_xdr *xdr, const int8_t *value);
int xdr_send_uint8(struct msm_rpc_xdr *xdr, const uint8_t *value);
int xdr_send_int16(struct msm_rpc_xdr *xdr, const int16_t *value);
int xdr_send_uint16(struct msm_rpc_xdr *xdr, const uint16_t *value);
int xdr_send_int32(struct msm_rpc_xdr *xdr, const int32_t *value);
int xdr_send_uint32(struct msm_rpc_xdr *xdr, const uint32_t *value);
int xdr_send_bytes(struct msm_rpc_xdr *xdr, const void **data, uint32_t *size);
int xdr_recv_int8(struct msm_rpc_xdr *xdr, int8_t *value);
int xdr_recv_uint8(struct msm_rpc_xdr *xdr, uint8_t *value);
int xdr_recv_int16(struct msm_rpc_xdr *xdr, int16_t *value);
int xdr_recv_uint16(struct msm_rpc_xdr *xdr, uint16_t *value);
int xdr_recv_int32(struct msm_rpc_xdr *xdr, int32_t *value);
int xdr_recv_uint32(struct msm_rpc_xdr *xdr, uint32_t *value);
int xdr_recv_bytes(struct msm_rpc_xdr *xdr, void **data, uint32_t *size);
struct msm_rpc_server
{
struct list_head list;

View File

@ -63,8 +63,6 @@ int smd_wait_until_writable(smd_channel_t *ch, int bytes);
#endif
int smd_wait_until_opened(smd_channel_t *ch, int timeout_us);
int smd_total_fifo_size(smd_channel_t *ch);
typedef enum
{
SMD_PORT_DS = 0,

View File

@ -43,19 +43,38 @@ static int msm_irq_debug_mask;
module_param_named(debug_mask, msm_irq_debug_mask, int, S_IRUGO | S_IWUSR | S_IWGRP);
#define VIC_REG(off) (MSM_VIC_BASE + (off))
#if defined(CONFIG_ARCH_MSM7X30)
#define VIC_INT_TO_REG_ADDR(base, irq) (base + (irq / 32) * 4)
#define VIC_INT_TO_REG_INDEX(irq) ((irq >> 5) & 3)
#else
#define VIC_INT_TO_REG_ADDR(base, irq) (base + ((irq & 32) ? 4 : 0))
#define VIC_INT_TO_REG_INDEX(irq) ((irq >> 5) & 1)
#endif
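A short worked example (not in the source) of how these helpers split an IRQ number into a register bank and a bit position:
/*
 * Non-7x30 (two banks):  irq 37 -> VIC_INT_TO_REG_ADDR(base, 37) = base + 4
 *                                  VIC_INT_TO_REG_INDEX(37)      = 1
 *                                  bit within the bank           = 37 & 31 = 5
 * 7x30 (four banks):     irq 70 -> base + 8, index (70 >> 5) & 3 = 2,
 *                                  bit 70 & 31 = 6
 */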
#define VIC_INT_SELECT0 VIC_REG(0x0000) /* 1: FIQ, 0: IRQ */
#define VIC_INT_SELECT1 VIC_REG(0x0004) /* 1: FIQ, 0: IRQ */
#define VIC_INT_SELECT2 VIC_REG(0x0008) /* 1: FIQ, 0: IRQ */
#define VIC_INT_SELECT3 VIC_REG(0x000C) /* 1: FIQ, 0: IRQ */
#define VIC_INT_EN0 VIC_REG(0x0010)
#define VIC_INT_EN1 VIC_REG(0x0014)
#define VIC_INT_EN2 VIC_REG(0x0018)
#define VIC_INT_EN3 VIC_REG(0x001C)
#define VIC_INT_ENCLEAR0 VIC_REG(0x0020)
#define VIC_INT_ENCLEAR1 VIC_REG(0x0024)
#define VIC_INT_ENCLEAR2 VIC_REG(0x0028)
#define VIC_INT_ENCLEAR3 VIC_REG(0x002C)
#define VIC_INT_ENSET0 VIC_REG(0x0030)
#define VIC_INT_ENSET1 VIC_REG(0x0034)
#define VIC_INT_ENSET2 VIC_REG(0x0038)
#define VIC_INT_ENSET3 VIC_REG(0x003C)
#define VIC_INT_TYPE0 VIC_REG(0x0040) /* 1: EDGE, 0: LEVEL */
#define VIC_INT_TYPE1 VIC_REG(0x0044) /* 1: EDGE, 0: LEVEL */
#define VIC_INT_TYPE2 VIC_REG(0x0048) /* 1: EDGE, 0: LEVEL */
#define VIC_INT_TYPE3 VIC_REG(0x004C) /* 1: EDGE, 0: LEVEL */
#define VIC_INT_POLARITY0 VIC_REG(0x0050) /* 1: NEG, 0: POS */
#define VIC_INT_POLARITY1 VIC_REG(0x0054) /* 1: NEG, 0: POS */
#define VIC_INT_POLARITY2 VIC_REG(0x0058) /* 1: NEG, 0: POS */
#define VIC_INT_POLARITY3 VIC_REG(0x005C) /* 1: NEG, 0: POS */
#define VIC_NO_PEND_VAL VIC_REG(0x0060)
#if defined(CONFIG_ARCH_MSM_SCORPION)
@ -69,14 +88,24 @@ module_param_named(debug_mask, msm_irq_debug_mask, int, S_IRUGO | S_IWUSR | S_IW
#endif
#define VIC_IRQ_STATUS0 VIC_REG(0x0080)
#define VIC_IRQ_STATUS1 VIC_REG(0x0084)
#define VIC_IRQ_STATUS2 VIC_REG(0x0088)
#define VIC_IRQ_STATUS3 VIC_REG(0x008C)
#define VIC_FIQ_STATUS0 VIC_REG(0x0090)
#define VIC_FIQ_STATUS1 VIC_REG(0x0094)
#define VIC_FIQ_STATUS2 VIC_REG(0x0098)
#define VIC_FIQ_STATUS3 VIC_REG(0x009C)
#define VIC_RAW_STATUS0 VIC_REG(0x00A0)
#define VIC_RAW_STATUS1 VIC_REG(0x00A4)
#define VIC_RAW_STATUS2 VIC_REG(0x00A8)
#define VIC_RAW_STATUS3 VIC_REG(0x00AC)
#define VIC_INT_CLEAR0 VIC_REG(0x00B0)
#define VIC_INT_CLEAR1 VIC_REG(0x00B4)
#define VIC_INT_CLEAR2 VIC_REG(0x00B8)
#define VIC_INT_CLEAR3 VIC_REG(0x00BC)
#define VIC_SOFTINT0 VIC_REG(0x00C0)
#define VIC_SOFTINT1 VIC_REG(0x00C4)
#define VIC_SOFTINT2 VIC_REG(0x00C8)
#define VIC_SOFTINT3 VIC_REG(0x00CC)
#define VIC_IRQ_VEC_RD VIC_REG(0x00D0) /* pending int # */
#define VIC_IRQ_VEC_PEND_RD VIC_REG(0x00D4) /* pending vector addr */
#define VIC_IRQ_VEC_WR VIC_REG(0x00D8)
@ -100,14 +129,40 @@ module_param_named(debug_mask, msm_irq_debug_mask, int, S_IRUGO | S_IWUSR | S_IW
#define VIC_VECTPRIORITY(n) VIC_REG(0x0200+((n) * 4))
#define VIC_VECTADDR(n) VIC_REG(0x0400+((n) * 4))
#if defined(CONFIG_ARCH_MSM7X30)
#define VIC_NUM_REGS 4
#else
#define VIC_NUM_REGS 2
#endif
#if VIC_NUM_REGS == 2
#define DPRINT_REGS(base_reg, format, ...) \
printk(KERN_INFO format " %x %x\n", ##__VA_ARGS__, \
readl(base_reg ## 0), readl(base_reg ## 1))
#define DPRINT_ARRAY(array, format, ...) \
printk(KERN_INFO format " %x %x\n", ##__VA_ARGS__, \
array[0], array[1])
#elif VIC_NUM_REGS == 4
#define DPRINT_REGS(base_reg, format, ...) \
printk(KERN_INFO format " %x %x %x %x\n", ##__VA_ARGS__, \
readl(base_reg ## 0), readl(base_reg ## 1), \
readl(base_reg ## 2), readl(base_reg ## 3))
#define DPRINT_ARRAY(array, format, ...) \
printk(KERN_INFO format " %x %x %x %x\n", ##__VA_ARGS__, \
array[0], array[1], \
array[2], array[3])
#else
#error "VIC_NUM_REGS set to illegal value"
#endif
static uint32_t msm_irq_smsm_wake_enable[2];
static struct {
uint32_t int_en[2];
uint32_t int_type;
uint32_t int_polarity;
uint32_t int_select;
} msm_irq_shadow_reg[2];
static uint32_t msm_irq_idle_disable[2];
} msm_irq_shadow_reg[VIC_NUM_REGS];
static uint32_t msm_irq_idle_disable[VIC_NUM_REGS];
#if defined(CONFIG_MSM_N_WAY_SMD)
#define INT_INFO_SMSM_ID SMEM_APPS_DEM_SLAVE_DATA
@ -143,7 +198,9 @@ static uint8_t msm_irq_to_smsm[NR_MSM_IRQS + NR_SIRC_IRQS] = {
[INT_UART1DM_IRQ] = 17,
[INT_UART1DM_RX] = 18,
[INT_KEYSENSE] = 19,
#if !defined(CONFIG_ARCH_MSM7X30)
[INT_AD_HSSD] = 20,
#endif
[INT_NAND_WR_ER_DONE] = 21,
[INT_NAND_OP_DONE] = 22,
@ -169,23 +226,31 @@ static uint8_t msm_irq_to_smsm[NR_MSM_IRQS + NR_SIRC_IRQS] = {
[INT_GP_TIMER_EXP] = SMSM_FAKE_IRQ,
[INT_DEBUG_TIMER_EXP] = SMSM_FAKE_IRQ,
[INT_ADSP_A11] = SMSM_FAKE_IRQ,
#ifdef CONFIG_ARCH_MSM_SCORPION
#ifdef CONFIG_ARCH_QSD8X50
[INT_SIRC_0] = SMSM_FAKE_IRQ,
[INT_SIRC_1] = SMSM_FAKE_IRQ,
#endif
};
static inline void msm_irq_write_all_regs(void __iomem *base, unsigned int val)
{
int i;
/* the register addresses must be contiguous */
for (i = 0; i < VIC_NUM_REGS; i++)
writel(val, base + (i * 4));
}
static void msm_irq_ack(unsigned int irq)
{
void __iomem *reg = VIC_INT_CLEAR0 + ((irq & 32) ? 4 : 0);
void __iomem *reg = VIC_INT_TO_REG_ADDR(VIC_INT_CLEAR0, irq);
irq = 1 << (irq & 31);
writel(irq, reg);
}
static void msm_irq_mask(unsigned int irq)
{
void __iomem *reg = VIC_INT_ENCLEAR0 + ((irq & 32) ? 4 : 0);
unsigned index = (irq >> 5) & 1;
void __iomem *reg = VIC_INT_TO_REG_ADDR(VIC_INT_ENCLEAR0, irq);
unsigned index = VIC_INT_TO_REG_INDEX(irq);
uint32_t mask = 1UL << (irq & 31);
int smsm_irq = msm_irq_to_smsm[irq];
@ -201,8 +266,8 @@ static void msm_irq_mask(unsigned int irq)
static void msm_irq_unmask(unsigned int irq)
{
void __iomem *reg = VIC_INT_ENSET0 + ((irq & 32) ? 4 : 0);
unsigned index = (irq >> 5) & 1;
void __iomem *reg = VIC_INT_TO_REG_ADDR(VIC_INT_ENSET0, irq);
unsigned index = VIC_INT_TO_REG_INDEX(irq);
uint32_t mask = 1UL << (irq & 31);
int smsm_irq = msm_irq_to_smsm[irq];
@ -219,7 +284,7 @@ static void msm_irq_unmask(unsigned int irq)
static int msm_irq_set_wake(unsigned int irq, unsigned int on)
{
unsigned index = (irq >> 5) & 1;
unsigned index = VIC_INT_TO_REG_INDEX(irq);
uint32_t mask = 1UL << (irq & 31);
int smsm_irq = msm_irq_to_smsm[irq];
@ -245,9 +310,9 @@ static int msm_irq_set_wake(unsigned int irq, unsigned int on)
static int msm_irq_set_type(unsigned int irq, unsigned int flow_type)
{
void __iomem *treg = VIC_INT_TYPE0 + ((irq & 32) ? 4 : 0);
void __iomem *preg = VIC_INT_POLARITY0 + ((irq & 32) ? 4 : 0);
unsigned index = (irq >> 5) & 1;
void __iomem *treg = VIC_INT_TO_REG_ADDR(VIC_INT_TYPE0, irq);
void __iomem *preg = VIC_INT_TO_REG_ADDR(VIC_INT_POLARITY0, irq);
unsigned index = VIC_INT_TO_REG_INDEX(irq);
int b = 1 << (irq & 31);
uint32_t polarity;
uint32_t type;
@ -276,16 +341,24 @@ static int msm_irq_set_type(unsigned int irq, unsigned int flow_type)
int msm_irq_pending(void)
{
return readl(VIC_IRQ_STATUS0) || readl(VIC_IRQ_STATUS1);
int i, pending = 0;
/* the register addresses must be contiguous */
for (i = 0; (i < VIC_NUM_REGS) && !pending; i++)
pending |= readl(VIC_IRQ_STATUS0 + (i * 4));
return pending;
}
int msm_irq_idle_sleep_allowed(void)
{
int i, disable = 0;
if (msm_irq_debug_mask & IRQ_DEBUG_SLEEP_REQUEST)
printk(KERN_INFO "msm_irq_idle_sleep_allowed: disable %x %x\n",
msm_irq_idle_disable[0], msm_irq_idle_disable[1]);
return !(msm_irq_idle_disable[0] || msm_irq_idle_disable[1] ||
!smsm_int_info);
DPRINT_ARRAY(msm_irq_idle_disable,
"msm_irq_idle_sleep_allowed: disable");
for (i = 0; i < VIC_NUM_REGS; i++)
disable |= msm_irq_idle_disable[i];
return !(disable || !smsm_int_info);
}
/* If arm9_wake is set: pass control to the other core.
@ -301,8 +374,8 @@ void msm_irq_enter_sleep1(bool arm9_wake, int from_idle)
int msm_irq_enter_sleep2(bool arm9_wake, int from_idle)
{
int limit = 10;
uint32_t pending0, pending1;
int i, limit = 10;
uint32_t pending[VIC_NUM_REGS];
if (from_idle && !arm9_wake)
return 0;
@ -311,23 +384,25 @@ int msm_irq_enter_sleep2(bool arm9_wake, int from_idle)
WARN_ON_ONCE(!arm9_wake && !from_idle);
if (msm_irq_debug_mask & IRQ_DEBUG_SLEEP)
printk(KERN_INFO "msm_irq_enter_sleep change irq, pend %x %x\n",
readl(VIC_IRQ_STATUS0), readl(VIC_IRQ_STATUS1));
pending0 = readl(VIC_IRQ_STATUS0);
pending1 = readl(VIC_IRQ_STATUS1);
pending0 &= msm_irq_shadow_reg[0].int_en[!from_idle];
DPRINT_REGS(VIC_IRQ_STATUS, "%s change irq, pend", __func__);
for (i = 0; i < VIC_NUM_REGS; i++) {
pending[i] = readl(VIC_IRQ_STATUS0 + (i * 4));
pending[i] &= msm_irq_shadow_reg[i].int_en[!from_idle];
}
/* Clear INT_A9_M2A_5 since requesting sleep triggers it */
pending0 &= ~(1U << INT_A9_M2A_5);
pending1 &= msm_irq_shadow_reg[1].int_en[!from_idle];
if (pending0 || pending1) {
if (msm_irq_debug_mask & IRQ_DEBUG_SLEEP_ABORT)
printk(KERN_INFO "msm_irq_enter_sleep2 abort %x %x\n",
pending0, pending1);
return -EAGAIN;
pending[0] &= ~(1U << INT_A9_M2A_5);
for (i = 0; i < VIC_NUM_REGS; i++) {
if (pending[i]) {
if (msm_irq_debug_mask & IRQ_DEBUG_SLEEP_ABORT)
DPRINT_ARRAY(pending, "%s abort",
__func__);
return -EAGAIN;
}
}
writel(0, VIC_INT_EN0);
writel(0, VIC_INT_EN1);
msm_irq_write_all_regs(VIC_INT_EN0, 0);
while (limit-- > 0) {
int pend_irq;
@ -345,8 +420,9 @@ int msm_irq_enter_sleep2(bool arm9_wake, int from_idle)
msm_irq_ack(INT_A9_M2A_6);
writel(1U << INT_A9_M2A_6, VIC_INT_ENSET0);
} else {
writel(msm_irq_shadow_reg[0].int_en[1], VIC_INT_ENSET0);
writel(msm_irq_shadow_reg[1].int_en[1], VIC_INT_ENSET1);
for (i = 0; i < VIC_NUM_REGS; i++)
writel(msm_irq_shadow_reg[i].int_en[1],
VIC_INT_ENSET0 + (i * 4));
}
return 0;
}
@ -357,7 +433,7 @@ void msm_irq_exit_sleep1(void)
msm_irq_ack(INT_A9_M2A_6);
msm_irq_ack(INT_PWB_I2C);
for (i = 0; i < 2; i++) {
for (i = 0; i < VIC_NUM_REGS; i++) {
writel(msm_irq_shadow_reg[i].int_type, VIC_INT_TYPE0 + i * 4);
writel(msm_irq_shadow_reg[i].int_polarity, VIC_INT_POLARITY0 + i * 4);
writel(msm_irq_shadow_reg[i].int_en[0], VIC_INT_EN0 + i * 4);
@ -451,20 +527,16 @@ void __init msm_init_irq(void)
unsigned n;
/* select level interrupts */
writel(0, VIC_INT_TYPE0);
writel(0, VIC_INT_TYPE1);
msm_irq_write_all_regs(VIC_INT_TYPE0, 0);
/* select highlevel interrupts */
writel(0, VIC_INT_POLARITY0);
writel(0, VIC_INT_POLARITY1);
msm_irq_write_all_regs(VIC_INT_POLARITY0, 0);
/* select IRQ for all INTs */
writel(0, VIC_INT_SELECT0);
writel(0, VIC_INT_SELECT1);
msm_irq_write_all_regs(VIC_INT_SELECT0, 0);
/* disable all INTs */
writel(0, VIC_INT_EN0);
writel(0, VIC_INT_EN1);
msm_irq_write_all_regs(VIC_INT_EN0, 0);
/* don't use 1136 vic */
writel(0, VIC_CONFIG);
@ -493,7 +565,7 @@ late_initcall(msm_init_irq_late);
#if defined(CONFIG_MSM_FIQ_SUPPORT)
void msm_trigger_irq(int irq)
{
void __iomem *reg = VIC_SOFTINT0 + ((irq & 32) ? 4 : 0);
void __iomem *reg = VIC_INT_TO_REG_ADDR(VIC_SOFTINT0, irq);
uint32_t mask = 1UL << (irq & 31);
writel(mask, reg);
}
@ -516,8 +588,8 @@ void msm_fiq_disable(int irq)
static void _msm_fiq_select(int irq)
{
void __iomem *reg = VIC_INT_SELECT0 + ((irq & 32) ? 4 : 0);
unsigned index = (irq >> 5) & 1;
void __iomem *reg = VIC_INT_TO_REG_ADDR(VIC_INT_SELECT0, irq);
unsigned index = VIC_INT_TO_REG_INDEX(irq);
uint32_t mask = 1UL << (irq & 31);
unsigned long flags;
@ -529,8 +601,8 @@ static void _msm_fiq_select(int irq)
static void _msm_fiq_unselect(int irq)
{
void __iomem *reg = VIC_INT_SELECT0 + ((irq & 32) ? 4 : 0);
unsigned index = (irq >> 5) & 1;
void __iomem *reg = VIC_INT_TO_REG_ADDR(VIC_INT_SELECT0, irq);
unsigned index = VIC_INT_TO_REG_INDEX(irq);
uint32_t mask = 1UL << (irq & 31);
unsigned long flags;

View File

@ -16,19 +16,10 @@
#include <linux/mm.h>
#include <linux/mm_types.h>
#include <linux/bootmem.h>
#include <linux/memory_alloc.h>
#include <linux/module.h>
#include <asm/pgtable.h>
#include <asm/io.h>
#include <asm/mach/map.h>
#include <asm/cacheflush.h>
#include <mach/msm_memtypes.h>
#include <linux/hardirq.h>
#if defined(CONFIG_MSM_NPA_REMOTE)
#include "npa_remote.h"
#include <linux/completion.h>
#include <linux/err.h>
#endif
int arch_io_remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
unsigned long pfn, unsigned long size, pgprot_t prot)
@ -43,7 +34,7 @@ int arch_io_remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
void *zero_page_strongly_ordered;
void map_zero_page_strongly_ordered(void)
static void map_zero_page_strongly_ordered(void)
{
if (zero_page_strongly_ordered)
return;
@ -52,15 +43,12 @@ void map_zero_page_strongly_ordered(void)
ioremap_strongly_ordered(page_to_pfn(empty_zero_page)
<< PAGE_SHIFT, PAGE_SIZE);
}
EXPORT_SYMBOL(map_zero_page_strongly_ordered);
void write_to_strongly_ordered_memory(void)
{
map_zero_page_strongly_ordered();
*(int *)zero_page_strongly_ordered = 0;
}
EXPORT_SYMBOL(write_to_strongly_ordered_memory);
void flush_axi_bus_buffer(void)
{
__asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 5" \
@ -121,57 +109,3 @@ void invalidate_caches(unsigned long vstart,
flush_axi_bus_buffer();
}
void *alloc_bootmem_aligned(unsigned long size, unsigned long alignment)
{
void *unused_addr = NULL;
unsigned long addr, tmp_size, unused_size;
/* Allocate maximum size needed, see where it ends up.
* Then free it -- in this path there are no other allocators
* so we can depend on getting the same address back
* when we allocate a smaller piece that is aligned
* at the end (if necessary) and the piece we really want,
* then free the unused first piece.
*/
tmp_size = size + alignment - PAGE_SIZE;
addr = (unsigned long)alloc_bootmem(tmp_size);
free_bootmem(__pa(addr), tmp_size);
unused_size = alignment - (addr % alignment);
if (unused_size)
unused_addr = alloc_bootmem(unused_size);
addr = (unsigned long)alloc_bootmem(size);
if (unused_size)
free_bootmem(__pa(unused_addr), unused_size);
return (void *)addr;
}
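A worked example with illustrative addresses: asking for size = 1 MB aligned to 1 MB when the first alloc_bootmem(tmp_size) lands at 0x10281000 gives unused_size = 0x100000 - 0x81000 = 0x7F000. Re-allocating that filler consumes memory up to the 0x10300000 boundary, the real 1 MB allocation then starts exactly on that boundary, and the filler piece is freed again.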
int platform_physical_remove_pages(unsigned long start_pfn,
unsigned long nr_pages)
{
return 0;
}
int platform_physical_add_pages(unsigned long start_pfn,
unsigned long nr_pages)
{
return 0;
}
int platform_physical_low_power_pages(unsigned long start_pfn,
unsigned long nr_pages)
{
return 0;
}
unsigned long allocate_contiguous_ebi_nomap(unsigned long size,
unsigned long align)
{
return _allocate_contiguous_memory_nomap(size, MEMTYPE_EBI0,
align, __builtin_return_address(0));
}
EXPORT_SYMBOL(allocate_contiguous_ebi_nomap);

View File

@ -4,7 +4,6 @@
* bootloader.
*
* Copyright (C) 2007 Google, Inc.
* Copyright (c) 2008-2009, Code Aurora Forum. All rights reserved.
* Author: Brian Swetland <swetland@google.com>
*
* This software is licensed under the terms of the GNU General Public
@ -23,7 +22,7 @@
#include <linux/platform_device.h>
#include <asm/mach/flash.h>
#include <linux/io.h>
#include <asm/io.h>
#include <asm/setup.h>
@ -39,26 +38,47 @@
#define ATAG_MSM_PARTITION 0x4d534D70 /* MSMp */
struct msm_ptbl_entry {
struct msm_ptbl_entry
{
char name[16];
__u32 offset;
__u32 size;
__u32 flags;
};
#define MSM_MAX_PARTITIONS 8
#define MSM_MAX_PARTITIONS 11
static struct mtd_partition msm_nand_partitions[MSM_MAX_PARTITIONS];
static char msm_nand_names[MSM_MAX_PARTITIONS * 16];
extern struct flash_platform_data msm_nand_data;
int emmc_partition_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
struct mtd_partition *ptn = msm_nand_partitions;
char *p = page;
int i;
uint64_t offset;
uint64_t size;
p += sprintf(p, "dev: size erasesize name\n");
for (i = 0; i < MSM_MAX_PARTITIONS && ptn->name; i++, ptn++) {
offset = ptn->offset;
size = ptn->size;
p += sprintf(p, "mmcblk0p%llu: %08llx %08x \"%s\"\n", offset, size * 512, 512, ptn->name);
}
return p - page;
}
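With two illustrative entries, the resulting proc output would look roughly like this (sizes are the sector counts scaled by 512 and printed as %08llx; as the format string suggests, the offset field doubles as the mmcblk partition index here):
dev: size erasesize name
mmcblk0p17: 00a00000 00000200 "recovery"
mmcblk0p25: 0c800000 00000200 "system"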
static int __init parse_tag_msm_partition(const struct tag *tag)
{
struct mtd_partition *ptn = msm_nand_partitions;
char *name = msm_nand_names;
struct msm_ptbl_entry *entry = (void *) &tag->u;
unsigned count, n;
unsigned have_kpanic = 0;
count = (tag->hdr.size - 2) /
(sizeof(struct msm_ptbl_entry) / sizeof(__u32));
@ -70,6 +90,9 @@ static int __init parse_tag_msm_partition(const struct tag *tag)
memcpy(name, entry->name, 15);
name[15] = 0;
if (!strcmp(name, "kpanic"))
have_kpanic = 1;
ptn->name = name;
ptn->offset = entry->offset;
ptn->size = entry->size;
@ -79,6 +102,42 @@ static int __init parse_tag_msm_partition(const struct tag *tag)
ptn++;
}
#ifdef CONFIG_VIRTUAL_KPANIC_PARTITION
if (!have_kpanic) {
int i;
uint64_t kpanic_off = 0;
if (count == MSM_MAX_PARTITIONS) {
printk("Cannot create virtual 'kpanic' partition\n");
goto out;
}
for (i = 0; i < count; i++) {
ptn = &msm_nand_partitions[i];
if (!strcmp(ptn->name, CONFIG_VIRTUAL_KPANIC_SRC)) {
ptn->size -= CONFIG_VIRTUAL_KPANIC_PSIZE;
kpanic_off = ptn->offset + ptn->size;
break;
}
}
if (i == count) {
printk(KERN_ERR "Partition %s not found\n",
CONFIG_VIRTUAL_KPANIC_SRC);
goto out;
}
ptn = &msm_nand_partitions[count];
ptn->name ="kpanic";
ptn->offset = kpanic_off;
ptn->size = CONFIG_VIRTUAL_KPANIC_PSIZE;
printk("Virtual mtd partition '%s' created @%llx (%llu)\n",
ptn->name, ptn->offset, ptn->size);
count++;
}
out:
#endif /* CONFIG_VIRTUAL_KPANIC_PARTITION */
msm_nand_data.nr_parts = count;
msm_nand_data.parts = msm_nand_partitions;

View File

@ -142,15 +142,13 @@ int msm_irq_idle_sleep_allowed(void);
int msm_irq_pending(void);
int clks_allow_tcxo_locked_debug(void);
extern int board_mfg_mode(void);
extern unsigned long * board_get_mfg_sleep_gpio_table(void);
extern char * board_get_mfg_sleep_gpio_table(void);
extern void gpio_set_diag_gpio_table(unsigned long * dwMFG_gpio_table);
extern void wait_rmt_final_call_back(int timeout);
#ifdef CONFIG_AXI_SCREEN_POLICY
static int axi_rate;
static int sleep_axi_rate;
static struct clk *axi_clk;
#endif
static uint32_t *msm_pm_reset_vector;
static uint32_t msm_pm_max_sleep_time;
@ -656,8 +654,8 @@ static int msm_wakeup_after; /* default, no wakeup by alarm */
static int msm_power_wakeup_after(const char *val, struct kernel_param *kp)
{
int ret;
//struct uart_port *port;
//struct msm_port *msm_port;
struct uart_port *port;
struct msm_port *msm_port;
ret = param_set_int(val, kp);
printk(KERN_INFO "+msm_power_wakeup_after, ret=%d\r\n", ret);
@ -683,7 +681,7 @@ static void msm_pm_power_off(void)
pmic_glb_power_down();
#ifdef CONFIG_MSM_RMT_STORAGE_SERVER
#if CONFIG_MSM_RMT_STORAGE_SERVER
printk(KERN_INFO "from %s\r\n", __func__);
wait_rmt_final_call_back(10);
printk(KERN_INFO "back %s\r\n", __func__);
@ -717,7 +715,7 @@ void msm_pm_flush_console(void)
}
#if defined(CONFIG_MACH_HTCLEO)
static void htcleo_save_reset_reason(void)
static void htcleo_save_reset_reason()
{
/* save restart_reason to be accessible in bootloader @ ramconsole - 0x1000 */
uint32_t *bootloader_reset_reason = ioremap(0x2FFB0000, PAGE_SIZE);
@ -730,7 +728,7 @@ static void htcleo_save_reset_reason(void)
}
#endif
static void msm_pm_restart(char str, const char *cmd)
static void msm_pm_restart(char str)
{
msm_pm_flush_console();
@ -744,7 +742,7 @@ static void msm_pm_restart(char str, const char *cmd)
else
msm_proc_comm(PCOM_RESET_CHIP, &restart_reason, 0);
#ifdef CONFIG_MSM_RMT_STORAGE_SERVER
#if CONFIG_MSM_RMT_STORAGE_SERVER
printk(KERN_INFO "from %s\r\n", __func__);
wait_rmt_final_call_back(10);
printk(KERN_INFO "back %s\r\n", __func__);
@ -860,7 +858,6 @@ void msm_pm_set_max_sleep_time(int64_t max_sleep_time_ns)
EXPORT_SYMBOL(msm_pm_set_max_sleep_time);
#ifdef CONFIG_EARLYSUSPEND
#ifdef CONFIG_AXI_SCREEN_POLICY
/* axi 128 screen on, 61mhz screen off */
static void axi_early_suspend(struct early_suspend *handler)
{
@ -880,9 +877,7 @@ static struct early_suspend axi_screen_suspend = {
.resume = axi_late_resume,
};
#endif
#endif
#ifdef CONFIG_AXI_SCREEN_POLICY
static void __init msm_pm_axi_init(void)
{
#ifdef CONFIG_EARLYSUSPEND
@ -900,18 +895,19 @@ static void __init msm_pm_axi_init(void)
axi_rate = 0;
#endif
}
#endif
static int __init msm_pm_init(void)
{
pm_power_off = msm_pm_power_off;
arm_pm_restart = msm_pm_restart;
msm_pm_max_sleep_time = 0;
#if defined(CONFIG_ARCH_MSM_SCORPION)
#ifdef CONFIG_AXI_SCREEN_POLICY
msm_pm_axi_init();
#endif
register_reboot_notifier(&msm_reboot_notifier);
#endif
register_reboot_notifier(&msm_reboot_notifier);
msm_pm_reset_vector = ioremap(0x0, PAGE_SIZE);
#if defined(CONFIG_MACH_HTCLEO)

View File

@ -1,7 +1,6 @@
/* arch/arm/mach-msm/proc_comm.c
*
* Copyright (C) 2007-2008 Google, Inc.
* Copyright (c) 2009-2010, Code Aurora Forum. All rights reserved.
* Author: Brian Swetland <swetland@google.com>
*
* This software is licensed under the terms of the GNU General Public
@ -19,24 +18,24 @@
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/module.h>
#include <mach/msm_iomap.h>
#include <mach/system.h>
#include "proc_comm.h"
#include "smd_private.h"
#if defined(CONFIG_ARCH_MSM7X30)
#define MSM_TRIG_A2M_PC_INT (writel(1 << 6, MSM_GCC_BASE + 0x8))
#elif defined(CONFIG_ARCH_MSM8X60)
#define MSM_TRIG_A2M_PC_INT (writel(1 << 5, MSM_GCC_BASE + 0x8))
#else
#define MSM_TRIG_A2M_PC_INT (writel(1, MSM_CSR_BASE + 0x400 + (6) * 4))
#define MSM_TRIG_A2M_INT(n) (writel(1 << n, MSM_GCC_BASE + 0x8))
#endif
#define MSM_A2M_INT(n) (MSM_CSR_BASE + 0x400 + (n) * 4)
static inline void notify_other_proc_comm(void)
{
MSM_TRIG_A2M_PC_INT;
#if defined(CONFIG_ARCH_MSM7X30)
MSM_TRIG_A2M_INT(6);
#else
writel(1, MSM_A2M_INT(6));
#endif
}
#define APP_COMMAND 0x00
@ -51,84 +50,69 @@ static inline void notify_other_proc_comm(void)
static DEFINE_SPINLOCK(proc_comm_lock);
/* The higher level SMD support will install this to
* provide a way to check for and handle modem restart.
*/
int (*msm_check_for_modem_crash)(void);
/* Poll for a state change, checking for possible
* modem crashes along the way (so we don't wait
* forever while the ARM9 is blowing up.
* forever while the ARM9 is blowing up).
*
* Return an error in the event of a modem crash and
* restart so the msm_proc_comm() routine can restart
* the operation from the beginning.
*/
static int proc_comm_wait_for(unsigned addr, unsigned value)
static int proc_comm_wait_for(void __iomem *addr, unsigned value)
{
while (1) {
for (;;) {
if (readl(addr) == value)
return 0;
if (smsm_check_for_modem_crash())
return -EAGAIN;
udelay(5);
if (msm_check_for_modem_crash)
if (msm_check_for_modem_crash())
return -EAGAIN;
}
}
void msm_proc_comm_reset_modem_now(void)
{
unsigned base = (unsigned)MSM_SHARED_RAM_BASE;
unsigned long flags;
spin_lock_irqsave(&proc_comm_lock, flags);
again:
if (proc_comm_wait_for(base + MDM_STATUS, PCOM_READY))
goto again;
writel(PCOM_RESET_MODEM, base + APP_COMMAND);
writel(0, base + APP_DATA1);
writel(0, base + APP_DATA2);
spin_unlock_irqrestore(&proc_comm_lock, flags);
notify_other_proc_comm();
return;
}
EXPORT_SYMBOL(msm_proc_comm_reset_modem_now);
int msm_proc_comm(unsigned cmd, unsigned *data1, unsigned *data2)
{
unsigned base = (unsigned)MSM_SHARED_RAM_BASE;
void __iomem *base = MSM_SHARED_RAM_BASE;
unsigned long flags;
int ret;
spin_lock_irqsave(&proc_comm_lock, flags);
again:
if (proc_comm_wait_for(base + MDM_STATUS, PCOM_READY))
goto again;
for (;;) {
if (proc_comm_wait_for(base + MDM_STATUS, PCOM_READY))
continue;
writel(cmd, base + APP_COMMAND);
writel(data1 ? *data1 : 0, base + APP_DATA1);
writel(data2 ? *data2 : 0, base + APP_DATA2);
notify_other_proc_comm();
if (proc_comm_wait_for(base + APP_COMMAND, PCOM_CMD_DONE))
goto again;
if (proc_comm_wait_for(base + APP_COMMAND, PCOM_CMD_DONE))
continue;
if (readl(base + APP_STATUS) == PCOM_CMD_SUCCESS) {
if (data1)
*data1 = readl(base + APP_DATA1);
if (data2)
*data2 = readl(base + APP_DATA2);
ret = 0;
} else {
ret = -EIO;
if (readl(base + APP_STATUS) != PCOM_CMD_FAIL) {
if (data1)
*data1 = readl(base + APP_DATA1);
if (data2)
*data2 = readl(base + APP_DATA2);
ret = 0;
} else {
ret = -EIO;
}
break;
}
writel(PCOM_CMD_IDLE, base + APP_COMMAND);
spin_unlock_irqrestore(&proc_comm_lock, flags);
return ret;
}
EXPORT_SYMBOL(msm_proc_comm);
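A minimal caller sketch (the wrapper is hypothetical; the command and argument convention mirror the PCOM_RESET_CHIP call in the PM code elsewhere in this series):
static void example_reset_chip(unsigned reason)
{
	unsigned data1 = reason;

	/* data2 is unused by this command, so pass NULL */
	if (msm_proc_comm(PCOM_RESET_CHIP, &data1, NULL))
		printk(KERN_ERR "example: PCOM_RESET_CHIP failed\n");
}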

View File

@ -1,6 +1,6 @@
/* arch/arm/mach-msm/proc_comm.h
*
* Copyright (c) 2007-2009, Code Aurora Forum. All rights reserved.
* Copyright (c) 2007 QUALCOMM Incorporated
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
@ -179,18 +179,7 @@ enum {
PCOM_CLKCTL_RPC_RAIL_DISABLE,
PCOM_CLKCTL_RPC_RAIL_CONTROL,
PCOM_CLKCTL_RPC_MIN_MSMC1,
PCOM_CLKCTL_RPC_SRC_REQUEST,
PCOM_NPA_INIT,
PCOM_NPA_ISSUE_REQUIRED_REQUEST,
};
enum {
PCOM_OEM_FIRST_CMD = 0x10000000,
PCOM_OEM_TEST_CMD = PCOM_OEM_FIRST_CMD,
/* add OEM PROC COMM commands here */
PCOM_OEM_LAST = PCOM_OEM_TEST_CMD,
PCOM_NUM_CMDS,
};
enum {
@ -210,6 +199,7 @@ enum {
PCOM_CMD_FAIL_SMSM_NOT_INIT,
PCOM_CMD_FAIL_PROC_COMM_BUSY,
PCOM_CMD_FAIL_PROC_COMM_NOT_INIT,
};
/* List of VREGs that support the Pull Down Resistor setting. */
@ -304,7 +294,6 @@ enum {
(((pull) & 0x3) << 15) | \
(((drvstr) & 0xF) << 17))
void msm_proc_comm_reset_modem_now(void);
int msm_proc_comm(unsigned cmd, unsigned *data1, unsigned *data2);
#endif

View File

@ -14,7 +14,6 @@
*
*/
#include <linux/gpio.h>
#include <linux/module.h>
#include "devices.h"
#include "proc_comm.h"

View File

@ -1,6 +1,6 @@
/* linux/arch/arm/mach-msm/irq.c
*
* Copyright (c) 2009-2010 Code Aurora Forum. All rights reserved.
* Copyright (c) 2009 QUALCOMM Incorporated.
* Copyright (C) 2009 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
@ -188,12 +188,6 @@ static void sirc_irq_handler(unsigned int irq, struct irq_desc *desc)
(sirc_reg_table[reg].cascade_irq != irq))
reg++;
if (reg == ARRAY_SIZE(sirc_reg_table)) {
printk(KERN_ERR "%s: incorrect irq %d called\n",
__func__, irq);
return;
}
status = readl(sirc_reg_table[reg].int_status);
status &= SIRC_MASK;
if (status == 0)

View File

@ -16,20 +16,12 @@
#ifndef _ARCH_ARM_MACH_MSM_SIRC_H
#define _ARCH_ARM_MACH_MSM_SIRC_H
#ifdef CONFIG_ARCH_MSM_SCORPION
#ifdef CONFIG_ARCH_QSD8X50
void sirc_fiq_select(int irq, bool enable);
void __init msm_init_sirc(void);
#else
static inline void sirc_fiq_select(int irq, bool enable) {}
#endif
#ifdef CONFIG_ARCH_QSD8X50
void __init msm_init_sirc(void);
void msm_sirc_enter_sleep(void);
void msm_sirc_exit_sleep(void);
#else
static inline void __init msm_init_sirc(void) {}
static inline void msm_sirc_enter_sleep(void) { }
static inline void msm_sirc_exit_sleep(void) { }
#endif
#endif

View File

@ -140,18 +140,16 @@ static void handle_modem_crash(void)
;
}
extern int (*msm_check_for_modem_crash)(void);
uint32_t raw_smsm_get_state(enum smsm_state_item item)
{
return readl(smd_info.state + item * 4);
}
int smsm_check_for_modem_crash(void)
static int check_for_modem_crash(void)
{
/* if the modem's not ready yet, we have to hope for the best */
if (!smd_info.state)
return 0;
if (raw_smsm_get_state(SMSM_MODEM_STATE) & SMSM_RESET) {
if (raw_smsm_get_state(SMSM_STATE_MODEM) & SMSM_RESET) {
handle_modem_crash();
return -1;
}
@ -383,18 +381,17 @@ static void update_packet_state(struct smd_channel *ch)
int r;
/* can't do anything if we're in the middle of a packet */
while (ch->current_packet == 0) {
/* discard 0 length packets if any */
if (ch->current_packet != 0)
return;
/* don't bother unless we can get the full header */
if (smd_stream_read_avail(ch) < SMD_HEADER_SIZE)
return;
/* don't bother unless we can get the full header */
if (smd_stream_read_avail(ch) < SMD_HEADER_SIZE)
return;
r = ch_read(ch, hdr, SMD_HEADER_SIZE);
BUG_ON(r != SMD_HEADER_SIZE);
r = ch_read(ch, hdr, SMD_HEADER_SIZE);
BUG_ON(r != SMD_HEADER_SIZE);
ch->current_packet = hdr[0];
}
ch->current_packet = hdr[0];
}
/* provide a pointer and length to next free space in the fifo */
@ -493,7 +490,7 @@ static void handle_smd_irq(struct list_head *list, void (*notify)(void))
#ifdef CONFIG_BUILD_CIQ
/* put here to make sure we got the disable/enable index */
if (!msm_smd_ciq_info)
msm_smd_ciq_info = (*(volatile uint32_t *)(MSM_SHARED_RAM_BASE + SMD_CIQ_BASE));
msm_smd_ciq_info = (*(volatile uint32_t *)(MSM_SHARED_RAM_BASE + 0xFC11C));
#endif
spin_lock_irqsave(&smd_lock, flags);
list_for_each_entry(ch, list, ch_list) {
@ -644,8 +641,6 @@ static int smd_stream_write(smd_channel_t *ch, const void *_data, int len)
if (len < 0)
return -EINVAL;
else if (len == 0)
return 0;
while ((xfer = ch_write_buffer(ch, &ptr)) != 0) {
if (!ch_is_open(ch))
@ -916,7 +911,6 @@ int smd_close(smd_channel_t *ch)
return 0;
}
EXPORT_SYMBOL(smd_close);
int smd_read(smd_channel_t *ch, void *data, int len)
{
@ -928,7 +922,6 @@ int smd_write(smd_channel_t *ch, const void *data, int len)
{
return ch->write(ch, data, len);
}
EXPORT_SYMBOL(smd_write);
int smd_write_atomic(smd_channel_t *ch, const void *data, int len)
{
@ -951,7 +944,6 @@ int smd_write_avail(smd_channel_t *ch)
{
return ch->write_avail(ch);
}
EXPORT_SYMBOL_GPL(smd_write_avail);
int smd_wait_until_readable(smd_channel_t *ch, int bytes)
{
@ -989,11 +981,6 @@ int smd_cur_packet_size(smd_channel_t *ch)
}
EXPORT_SYMBOL(smd_cur_packet_size);
/* Returns SMD buffer size */
int smd_total_fifo_size(smd_channel_t *ch)
{
return ch->fifo_size;
}
/* ------------------------------------------------------------------------- */
@ -1240,6 +1227,8 @@ static int __init msm_smd_probe(struct platform_device *pdev)
do_smd_probe();
msm_check_for_modem_crash = check_for_modem_crash;
msm_init_last_radio_log(THIS_MODULE);
smd_initialized = 1;

View File

@ -1,7 +1,7 @@
/* arch/arm/mach-msm/smd_private.h
*
* Copyright (C) 2007 Google, Inc.
* Copyright (c) 2007-2010, Code Aurora Forum. All rights reserved.
* Copyright (c) 2007 QUALCOMM Incorporated
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
@ -16,9 +16,6 @@
#ifndef _ARCH_ARM_MACH_MSM_MSM_SMD_PRIVATE_H_
#define _ARCH_ARM_MACH_MSM_MSM_SMD_PRIVATE_H_
#include <linux/types.h>
#include <linux/spinlock.h>
struct smem_heap_info {
unsigned initialized;
unsigned free_offset;
@ -49,15 +46,12 @@ struct smem_proc_comm {
#define VERSION_MODEM_SBL 7
#define VERSION_APPS 8
#define VERSION_MODEM 9
#define VERSION_DSPS 10
#define SMD_HEAP_SIZE 512
struct smem_shared {
struct smem_proc_comm proc_comm[4];
unsigned version[32];
struct smem_heap_info heap_info;
struct smem_heap_entry heap_toc[SMD_HEAP_SIZE];
struct smem_heap_entry heap_toc[512];
};
#define SMSM_V1_SIZE (sizeof(unsigned) * 8)
@ -95,70 +89,41 @@ struct smsm_interrupt_info {
};
#endif
#if defined(CONFIG_MSM_N_WAY_SMSM)
enum {
SMSM_APPS_STATE,
SMSM_MODEM_STATE,
SMSM_Q6_STATE,
SMSM_APPS_DEM,
SMSM_MODEM_DEM,
SMSM_Q6_DEM,
SMSM_POWER_MASTER_DEM,
SMSM_TIME_MASTER_DEM,
SMSM_NUM_ENTRIES,
};
#else
enum {
SMSM_APPS_STATE = 1,
SMSM_MODEM_STATE = 3,
SMSM_NUM_ENTRIES,
};
#endif
enum {
SMSM_APPS,
SMSM_MODEM,
SMSM_Q6,
SMSM_NUM_HOSTS,
};
#define SZ_DIAG_ERR_MSG 0xC8
#define ID_DIAG_ERR_MSG SMEM_DIAG_ERR_MESSAGE
#define ID_SMD_CHANNELS SMEM_SMD_BASE_ID
#define ID_SHARED_STATE SMEM_SMSM_SHARED_STATE
#define ID_CH_ALLOC_TBL SMEM_CHANNEL_ALLOC_TBL
#define SMSM_INIT 0x00000001
#define SMSM_OSENTERED 0x00000002
#define SMSM_SMDWAIT 0x00000004
#define SMSM_SMDINIT 0x00000008
#define SMSM_RPCWAIT 0x00000010
#define SMSM_RPCINIT 0x00000020
#define SMSM_RESET 0x00000040
#define SMSM_RSA 0x00000080
#define SMSM_RUN 0x00000100
#define SMSM_PWRC 0x00000200
#define SMSM_TIMEWAIT 0x00000400
#define SMSM_TIMEINIT 0x00000800
#define SMSM_PWRC_EARLY_EXIT 0x00001000
#define SMSM_WFPI 0x00002000
#define SMSM_SLEEP 0x00004000
#define SMSM_SLEEPEXIT 0x00008000
#define SMSM_OEMSBL_RELEASE 0x00010000
#define SMSM_APPS_REBOOT 0x00020000
#define SMSM_SYSTEM_POWER_DOWN 0x00040000
#define SMSM_SYSTEM_REBOOT 0x00080000
#define SMSM_SYSTEM_DOWNLOAD 0x00100000
#define SMSM_PWRC_SUSPEND 0x00200000
#define SMSM_APPS_SHUTDOWN 0x00400000
#define SMSM_SMD_LOOPBACK 0x00800000
#define SMSM_RUN_QUIET 0x01000000
#define SMSM_MODEM_WAIT 0x02000000
#define SMSM_MODEM_BREAK 0x04000000
#define SMSM_MODEM_CONTINUE 0x08000000
#define SMSM_SYSTEM_REBOOT_USR 0x20000000
#define SMSM_SYSTEM_PWRDWN_USR 0x40000000
#define SMSM_UNKNOWN 0x80000000
#define SMSM_WKUP_REASON_RPC 0x00000001
#define SMSM_WKUP_REASON_INT 0x00000002
@ -286,17 +251,18 @@ typedef enum {
} smem_mem_type;
#define SMD_SS_CLOSED 0x00000000
#define SMD_SS_OPENING 0x00000001
#define SMD_SS_OPENED 0x00000002
#define SMD_SS_FLUSHING 0x00000003
#define SMD_SS_CLOSING 0x00000004
#define SMD_SS_RESET 0x00000005
#define SMD_SS_RESET_OPENING 0x00000006
#define SMD_BUF_SIZE 8192
#define SMD_CHANNELS 64
#define SMD_HEADER_SIZE 20
#define SMD_TYPE_MASK 0x0FF
#define SMD_TYPE_APPS_MODEM 0x000
@ -308,8 +274,6 @@ typedef enum {
#define SMD_KIND_STREAM 0x100
#define SMD_KIND_PACKET 0x200
int smsm_check_for_modem_crash(void);
#define msm_check_for_modem_crash smsm_check_for_modem_crash
void *smem_find(unsigned id, unsigned size);
void *smem_item(unsigned id, unsigned *size);
uint32_t raw_smsm_get_state(enum smsm_state_item item);

View File

@ -22,7 +22,6 @@
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/cdev.h>
@ -106,7 +105,7 @@ static struct wake_lock rpcrouter_wake_lock;
static int rpcrouter_need_len;
static atomic_t next_xid = ATOMIC_INIT(1);
static atomic_t next_mid = ATOMIC_INIT(0);
static uint8_t next_pacmarkid;
static void do_read_data(struct work_struct *work);
static void do_create_pdevs(struct work_struct *work);
@ -115,16 +114,12 @@ static void do_create_rpcrouter_pdev(struct work_struct *work);
static DECLARE_WORK(work_read_data, do_read_data);
static DECLARE_WORK(work_create_pdevs, do_create_pdevs);
static DECLARE_WORK(work_create_rpcrouter_pdev, do_create_rpcrouter_pdev);
static atomic_t rpcrouter_pdev_created = ATOMIC_INIT(0);
#define RR_STATE_IDLE 0
#define RR_STATE_HEADER 1
#define RR_STATE_BODY 2
#define RR_STATE_ERROR 3
#define RMT_STORAGE_APIPROG_BE32 0xa7000030
#define RMT_STORAGE_SRV_APIPROG_BE32 0x9c000030
struct rr_context {
struct rr_packet *pkt;
uint8_t *ptr;
@ -267,7 +262,6 @@ struct msm_rpc_endpoint *msm_rpcrouter_create_local_endpoint(dev_t dev)
{
struct msm_rpc_endpoint *ept;
unsigned long flags;
int i;
ept = kmalloc(sizeof(struct msm_rpc_endpoint), GFP_KERNEL);
if (!ept)
@ -275,9 +269,7 @@ struct msm_rpc_endpoint *msm_rpcrouter_create_local_endpoint(dev_t dev)
memset(ept, 0, sizeof(struct msm_rpc_endpoint));
/* mark no reply outstanding */
ept->next_rroute = 0;
for (i = 0; i < MAX_REPLY_ROUTE; i++)
ept->rroute[i].pid = 0xffffffff;
ept->reply_pid = 0xffffffff;
ept->cid = (uint32_t) ept;
ept->pid = RPCROUTER_PID_LOCAL;
@ -538,8 +530,7 @@ static int process_control_msg(union rr_control_msg *msg, int len)
static void do_create_rpcrouter_pdev(struct work_struct *work)
{
if (atomic_cmpxchg(&rpcrouter_pdev_created, 0, 1) == 0)
platform_device_register(&rpcrouter_pdev);
platform_device_register(&rpcrouter_pdev);
}
static void do_create_pdevs(struct work_struct *work)
@ -661,13 +652,11 @@ static void do_read_data(struct work_struct *work)
hdr.size -= sizeof(pm);
frag = rr_malloc(sizeof(*frag));
frag = rr_malloc(hdr.size + sizeof(*frag));
frag->next = NULL;
frag->length = hdr.size;
if (rr_read(frag->data, hdr.size)) {
kfree(frag);
if (rr_read(frag->data, hdr.size))
goto fail_io;
}
ept = rpcrouter_lookup_local_endpoint(hdr.dst_cid);
if (!ept) {
@ -769,77 +758,19 @@ int msm_rpc_close(struct msm_rpc_endpoint *ept)
}
EXPORT_SYMBOL(msm_rpc_close);
static int msm_rpc_write_pkt(struct msm_rpc_endpoint *ept,
struct rr_remote_endpoint *r_ept,
struct rr_header *hdr,
uint32_t pacmark,
void *buffer, int count)
{
DEFINE_WAIT(__wait);
unsigned long flags;
int needed;
for (;;) {
prepare_to_wait(&r_ept->quota_wait, &__wait,
TASK_INTERRUPTIBLE);
spin_lock_irqsave(&r_ept->quota_lock, flags);
if (r_ept->tx_quota_cntr < RPCROUTER_DEFAULT_RX_QUOTA)
break;
if (signal_pending(current) &&
(!(ept->flags & MSM_RPC_UNINTERRUPTIBLE)))
break;
spin_unlock_irqrestore(&r_ept->quota_lock, flags);
schedule();
}
finish_wait(&r_ept->quota_wait, &__wait);
if (signal_pending(current) &&
(!(ept->flags & MSM_RPC_UNINTERRUPTIBLE))) {
spin_unlock_irqrestore(&r_ept->quota_lock, flags);
return -ERESTARTSYS;
}
r_ept->tx_quota_cntr++;
if (r_ept->tx_quota_cntr == RPCROUTER_DEFAULT_RX_QUOTA)
hdr->confirm_rx = 1;
spin_unlock_irqrestore(&r_ept->quota_lock, flags);
spin_lock_irqsave(&smd_lock, flags);
needed = sizeof(*hdr) + hdr->size;
while (smd_write_avail(smd_channel) < needed) {
spin_unlock_irqrestore(&smd_lock, flags);
msleep(250);
spin_lock_irqsave(&smd_lock, flags);
}
/* TODO: deal with full fifo */
smd_write(smd_channel, hdr, sizeof(*hdr));
smd_write(smd_channel, &pacmark, sizeof(pacmark));
smd_write(smd_channel, buffer, count);
spin_unlock_irqrestore(&smd_lock, flags);
return 0;
}
int msm_rpc_write(struct msm_rpc_endpoint *ept, void *buffer, int count)
{
struct rr_header hdr;
uint32_t pacmark;
uint32_t mid;
struct rpc_request_hdr *rq = buffer;
struct rr_remote_endpoint *r_ept;
int ret;
int total;
unsigned long flags;
int needed;
DEFINE_WAIT(__wait);
if (((rq->prog&0xFFFFFFF0) == RMT_STORAGE_APIPROG_BE32) ||
((rq->prog&0xFFFFFFF0) == RMT_STORAGE_SRV_APIPROG_BE32)) {
printk(KERN_DEBUG
"rpc_write: prog = %x , procedure = %d, type = %d, xid = %d\n"
, be32_to_cpu(rq->prog), be32_to_cpu(rq->procedure)
, be32_to_cpu(rq->type), be32_to_cpu(rq->xid));
}
/* TODO: fragmentation for large outbound packets */
if (count > (RPCROUTER_MSGSIZE_MAX - sizeof(uint32_t)) || !count)
return -EINVAL;
/* snoop the RPC packet and enforce permissions */
@ -887,21 +818,23 @@ int msm_rpc_write(struct msm_rpc_endpoint *ept, void *buffer, int count)
} else {
/* RPC REPLY */
/* TODO: locking */
for (ret = 0; ret < MAX_REPLY_ROUTE; ret++)
if (ept->rroute[ret].xid == rq->xid) {
if (ept->rroute[ret].pid == 0xffffffff)
continue;
hdr.dst_pid = ept->rroute[ret].pid;
hdr.dst_cid = ept->rroute[ret].cid;
/* consume this reply */
ept->rroute[ret].pid = 0xffffffff;
goto found_rroute;
if (ept->reply_pid == 0xffffffff) {
printk(KERN_ERR
"rr_write: rejecting unexpected reply\n");
return -EINVAL;
}
if (ept->reply_xid != rq->xid) {
printk(KERN_ERR
"rr_write: rejecting packet w/ bad xid\n");
return -EINVAL;
}
printk(KERN_ERR "rr_write: rejecting packet w/ bad xid\n");
return -EINVAL;
hdr.dst_pid = ept->reply_pid;
hdr.dst_cid = ept->reply_cid;
/* consume this reply */
ept->reply_pid = 0xffffffff;
found_rroute:
IO("REPLY on ept %p to xid=%d @ %d:%08x (%d bytes)\n",
ept,
be32_to_cpu(rq->xid), hdr.dst_pid, hdr.dst_cid, count);
@@ -921,36 +854,56 @@ found_rroute:
hdr.version = RPCROUTER_VERSION;
hdr.src_pid = ept->pid;
hdr.src_cid = ept->cid;
hdr.confirm_rx = 0;
hdr.size = count + sizeof(uint32_t);
total = count;
for (;;) {
prepare_to_wait(&r_ept->quota_wait, &__wait,
TASK_INTERRUPTIBLE);
spin_lock_irqsave(&r_ept->quota_lock, flags);
if (r_ept->tx_quota_cntr < RPCROUTER_DEFAULT_RX_QUOTA)
break;
if (signal_pending(current) &&
(!(ept->flags & MSM_RPC_UNINTERRUPTIBLE)))
break;
spin_unlock_irqrestore(&r_ept->quota_lock, flags);
schedule();
}
finish_wait(&r_ept->quota_wait, &__wait);
mid = atomic_add_return(1, &next_mid) & 0xFF;
if (signal_pending(current) &&
(!(ept->flags & MSM_RPC_UNINTERRUPTIBLE))) {
spin_unlock_irqrestore(&r_ept->quota_lock, flags);
return -ERESTARTSYS;
}
r_ept->tx_quota_cntr++;
if (r_ept->tx_quota_cntr == RPCROUTER_DEFAULT_RX_QUOTA)
hdr.confirm_rx = 1;
while (count > 0) {
unsigned xfer;
/* bump pacmark while interrupts disabled to avoid race
* probably should be atomic op instead
*/
pacmark = PACMARK(count, ++next_pacmarkid, 0, 1);
if (count > RPCROUTER_DATASIZE_MAX)
xfer = RPCROUTER_DATASIZE_MAX;
else
xfer = count;
spin_unlock_irqrestore(&r_ept->quota_lock, flags);
hdr.confirm_rx = 0;
hdr.size = xfer + sizeof(uint32_t);
spin_lock_irqsave(&smd_lock, flags);
/* total == count -> must be first packet
* xfer == count -> must be last packet
*/
pacmark = PACMARK(xfer, mid, (total == count), (xfer == count));
ret = msm_rpc_write_pkt(ept, r_ept, &hdr, pacmark, buffer, xfer);
if (ret < 0)
return ret;
buffer += xfer;
count -= xfer;
needed = sizeof(hdr) + hdr.size;
while (smd_write_avail(smd_channel) < needed) {
spin_unlock_irqrestore(&smd_lock, flags);
msleep(250);
spin_lock_irqsave(&smd_lock, flags);
}
return total;
/* TODO: deal with full fifo */
smd_write(smd_channel, &hdr, sizeof(hdr));
smd_write(smd_channel, &pacmark, sizeof(pacmark));
smd_write(smd_channel, buffer, count);
spin_unlock_irqrestore(&smd_lock, flags);
return count;
}
EXPORT_SYMBOL(msm_rpc_write);
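The loop above fragments large messages: each pass sends at most RPCROUTER_DATASIZE_MAX bytes, and the pacmark word tags the fragment with its length, a message id, and first/last markers derived from the (total == count) and (xfer == count) tests. A minimal standalone sketch of that bookkeeping, hypothetical rather than driver code, assuming a 1200-byte payload and a 500-byte fragment limit:

/* Hypothetical illustration of the fragmentation arithmetic only. */
int total = 1200, count = total;
while (count > 0) {
        int xfer  = (count > 500) ? 500 : count;
        int first = (total == count);   /* true only for the first fragment */
        int last  = (xfer == count);    /* true only for the last fragment */
        /* emits fragments: (500, first) (500, middle) (200, last) */
        count -= xfer;
}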
@@ -1151,30 +1104,20 @@ int __msm_rpc_read(struct msm_rpc_endpoint *ept,
*frag_ret = pkt->first;
rq = (void*) pkt->first->data;
if (((rq->prog&0xFFFFFFF0) == RMT_STORAGE_APIPROG_BE32) ||
((rq->prog&0xFFFFFFF0) == RMT_STORAGE_SRV_APIPROG_BE32)) {
printk(KERN_DEBUG
"rpc_read: prog = %x , procedure = %d, type = %d, xid = %d\n"
, be32_to_cpu(rq->prog), be32_to_cpu(rq->procedure)
, be32_to_cpu(rq->type), be32_to_cpu(rq->xid));
}
if ((rc >= (sizeof(uint32_t) * 3)) && (rq->type == 0)) {
IO("READ on ept %p is a CALL on %08x:%08x proc %d xid %d\n",
ept, be32_to_cpu(rq->prog), be32_to_cpu(rq->vers),
be32_to_cpu(rq->procedure),
be32_to_cpu(rq->xid));
/* RPC CALL */
if (ept->rroute[ept->next_rroute].pid != 0xffffffff) {
if (ept->reply_pid != 0xffffffff) {
printk(KERN_WARNING
"rr_read: lost previous reply xid...\n");
}
/* TODO: locking? */
ept->rroute[ept->next_rroute].pid = pkt->hdr.src_pid;
ept->rroute[ept->next_rroute].cid = pkt->hdr.src_cid;
ept->rroute[ept->next_rroute].xid = rq->xid;
ept->next_rroute = (ept->next_rroute + 1) & (MAX_REPLY_ROUTE - 1);
ept->reply_pid = pkt->hdr.src_pid;
ept->reply_cid = pkt->hdr.src_cid;
ept->reply_xid = rq->xid;
}
#if TRACE_RPC_MSG
else if ((rc >= (sizeof(uint32_t) * 3)) && (rq->type == 1))
@@ -32,7 +32,6 @@
#define RPCROUTER_VERSION 1
#define RPCROUTER_PROCESSORS_MAX 4
#define RPCROUTER_MSGSIZE_MAX 512
#define RPCROUTER_DATASIZE_MAX 500
#if defined(CONFIG_ARCH_MSM7X30)
#define RPCROUTER_PEND_REPLIES_MAX 32
#endif
@@ -51,7 +50,6 @@
#define RPCROUTER_CTRL_CMD_REMOVE_CLIENT 6
#define RPCROUTER_CTRL_CMD_RESUME_TX 7
#define RPCROUTER_CTRL_CMD_EXIT 8
#define RPCROUTER_CTRL_CMD_PING 9
#define RPCROUTER_DEFAULT_RX_QUOTA 5
@@ -143,15 +141,6 @@ struct rr_remote_endpoint {
struct list_head list;
};
struct msm_reply_route {
uint32_t xid;
uint32_t pid;
uint32_t cid;
uint32_t unused;
};
#define MAX_REPLY_ROUTE 4
#if defined(CONFIG_ARCH_MSM7X30)
struct msm_rpc_reply {
struct list_head list;
@@ -194,12 +183,15 @@ struct msm_rpc_endpoint {
uint32_t dst_prog; /* be32 */
uint32_t dst_vers; /* be32 */
/* RPC_REPLY writes must be routed to the pid/cid of the
* RPC_CALL they are in reply to. Keep a cache of valid
* xid/pid/cid groups. pid 0xffffffff -> not valid.
/* reply remote address
* if reply_pid == 0xffffffff, none available
* RPC_REPLY writes may only go to the pid/cid/xid of the
* last RPC_CALL we received.
*/
unsigned next_rroute;
struct msm_reply_route rroute[MAX_REPLY_ROUTE];
uint32_t reply_pid;
uint32_t reply_cid;
uint32_t reply_xid; /* be32 */
uint32_t next_pm; /* Pacmark sequence */
#if defined(CONFIG_ARCH_MSM7X30)
/* reply queue for inbound messages */
@@ -232,7 +224,6 @@ void msm_rpcrouter_exit_devices(void);
#if defined(CONFIG_ARCH_MSM7X30)
void get_requesting_client(struct msm_rpc_endpoint *ept, uint32_t xid,
struct msm_rpc_client_info *clnt_info);
int msm_rpc_clear_netreset(struct msm_rpc_endpoint *ept);
#endif
extern dev_t msm_rpcrouter_devno;
@@ -26,7 +26,6 @@
#include <linux/fs.h>
#include <linux/err.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/poll.h>
#include <linux/platform_device.h>
#include <linux/msm_rpcrouter.h>
@@ -16,7 +16,6 @@
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/cdev.h>
@@ -78,8 +78,6 @@
#include <linux/string.h>
#include <linux/kernel.h>
#include <linux/delay.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <mach/msm_rpcrouter.h>
@@ -423,7 +421,7 @@ int xdr_send_msg(struct msm_rpc_xdr *xdr)
void xdr_init(struct msm_rpc_xdr *xdr)
{
mutex_init(&xdr->out_lock);
init_waitqueue_head(&xdr->in_buf_wait_q);
mutex_init(&xdr->in_lock);
xdr->in_buf = NULL;
xdr->in_size = 0;
@@ -436,7 +434,7 @@ void xdr_init(struct msm_rpc_xdr *xdr)
void xdr_init_input(struct msm_rpc_xdr *xdr, void *buf, uint32_t size)
{
wait_event(xdr->in_buf_wait_q, !(xdr->in_buf));
mutex_lock(&xdr->in_lock);
xdr->in_buf = buf;
xdr->in_size = size;
@@ -457,7 +455,7 @@ void xdr_clean_input(struct msm_rpc_xdr *xdr)
xdr->in_size = 0;
xdr->in_index = 0;
wake_up(&xdr->in_buf_wait_q);
mutex_unlock(&xdr->in_lock);
}
void xdr_clean_output(struct msm_rpc_xdr *xdr)
@@ -30,7 +30,6 @@
#include "board-htcleo.h"
#define MAX_SMD_TTYS 32
#define MAX_TTY_BUF_SIZE 2048
static DEFINE_MUTEX(smd_tty_lock);
@@ -76,9 +75,6 @@ static void smd_tty_work_func(struct work_struct *work)
tty->low_latency = 0;
tty_flip_buffer_push(tty);
break;
if (avail > MAX_TTY_BUF_SIZE)
avail = MAX_TTY_BUF_SIZE;
}
ptr = NULL;
@@ -169,10 +169,18 @@ static int msm_timer_set_next_event(unsigned long cycles,
clock->last_set = now;
clock->alarm_vtime = alarm + clock->offset;
late = now - alarm;
if (late >= (int)(-clock->write_delay << clock->shift) &&
late < clock->freq*5)
if (late >= (int)(-clock->write_delay << clock->shift) && late < DGT_HZ*5) {
static int print_limit = 10;
if (print_limit > 0) {
print_limit--;
printk(KERN_NOTICE "msm_timer_set_next_event(%lu) "
"clock %s, alarm already expired, now %x, "
"alarm %x, late %d%s\n",
cycles, clock->clockevent.name, now, alarm, late,
print_limit ? "" : " stop printing");
}
return -ETIME;
}
return 0;
}
@@ -574,12 +582,9 @@ static struct msm_clock msm_clocks[] = {
#endif
.freq = GPT_HZ,
.flags =
#ifdef CONFIG_ARCH_MSM_ARM11
MSM_CLOCK_FLAGS_UNSTABLE_COUNT |
MSM_CLOCK_FLAGS_ODD_MATCH_WRITE |
MSM_CLOCK_FLAGS_DELAYED_WRITE_POST |
#endif
0,
MSM_CLOCK_FLAGS_DELAYED_WRITE_POST,
.write_delay = 9,
},
[MSM_CLOCK_DGT] = {
@@ -151,18 +151,4 @@ config SYS_HYPERVISOR
bool
default n
config GENLOCK
bool "Enable a generic cross-process locking mechanism"
depends on ANON_INODES
help
Enable a generic cross-process locking API to provide protection
for shared memory objects such as graphics buffers.
config GENLOCK_MISCDEVICE
bool "Enable a misc-device for userspace to access the genlock engine"
depends on GENLOCK
help
Create a miscdevice for the purposes of allowing userspace to create
and interact with locks created using genlock.
endmenu
@@ -8,7 +8,6 @@ obj-$(CONFIG_DEVTMPFS) += devtmpfs.o
obj-y += power/
obj-$(CONFIG_HAS_DMA) += dma-mapping.o
obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
obj-$(CONFIG_GENLOCK) += genlock.o
obj-$(CONFIG_ISA) += isa.o
obj-$(CONFIG_FW_LOADER) += firmware_class.o
obj-$(CONFIG_NUMA) += node.o
@@ -1,640 +0,0 @@
/* Copyright (c) 2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/fb.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/list.h>
#include <linux/file.h>
#include <linux/sched.h>
#include <linux/fs.h>
#include <linux/wait.h>
#include <linux/uaccess.h>
#include <linux/anon_inodes.h>
#include <linux/miscdevice.h>
#include <linux/genlock.h>
#include <linux/interrupt.h> /* for in_interrupt() */
/* Lock states - can either be unlocked, held as an exclusive write lock or a
* shared read lock
*/
#define _UNLOCKED 0
#define _RDLOCK GENLOCK_RDLOCK
#define _WRLOCK GENLOCK_WRLOCK
struct genlock {
struct list_head active; /* List of handles holding lock */
spinlock_t lock; /* Spinlock to protect the lock internals */
wait_queue_head_t queue; /* Holding pen for processes pending lock */
struct file *file; /* File structure for exported lock */
int state; /* Current state of the lock */
struct kref refcount;
};
struct genlock_handle {
struct genlock *lock; /* Lock currently attached to the handle */
struct list_head entry; /* List node for attaching to a lock */
struct file *file; /* File structure associated with handle */
int active; /* Number of times the active lock has been
taken */
};
static void genlock_destroy(struct kref *kref)
{
struct genlock *lock = container_of(kref, struct genlock,
refcount);
kfree(lock);
}
/*
* Release the genlock object. Called when all the references to
* the genlock file descriptor are released
*/
static int genlock_release(struct inode *inodep, struct file *file)
{
return 0;
}
static const struct file_operations genlock_fops = {
.release = genlock_release,
};
/**
* genlock_create_lock - Create a new lock
* @handle - genlock handle to attach the lock to
*
* Returns: a pointer to the genlock
*/
struct genlock *genlock_create_lock(struct genlock_handle *handle)
{
struct genlock *lock;
if (handle->lock != NULL)
return ERR_PTR(-EINVAL);
lock = kzalloc(sizeof(*lock), GFP_KERNEL);
if (lock == NULL)
return ERR_PTR(-ENOMEM);
INIT_LIST_HEAD(&lock->active);
init_waitqueue_head(&lock->queue);
spin_lock_init(&lock->lock);
lock->state = _UNLOCKED;
/*
* Create an anonymous inode for the object that can be exported to
* other processes
*/
lock->file = anon_inode_getfile("genlock", &genlock_fops,
lock, O_RDWR);
/* Attach the new lock to the handle */
handle->lock = lock;
kref_init(&lock->refcount);
return lock;
}
EXPORT_SYMBOL(genlock_create_lock);
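For an in-kernel user, the intended sequence is to allocate a handle and then create a lock through it. A hedged sketch of a driver-side call site, using only functions defined in this file:

struct genlock_handle *h = genlock_get_handle();
struct genlock *lock;

if (IS_ERR(h))
        return PTR_ERR(h);
lock = genlock_create_lock(h);  /* fails with -EINVAL if h already has a lock */
if (IS_ERR(lock))
        return PTR_ERR(lock);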
/*
* Get a file descriptor reference to a lock suitable for sharing with
* other processes
*/
static int genlock_get_fd(struct genlock *lock)
{
int ret;
if (!lock->file)
return -EINVAL;
ret = get_unused_fd_flags(0);
if (ret < 0)
return ret;
fd_install(ret, lock->file);
return ret;
}
/**
* genlock_attach_lock - Attach an existing lock to a handle
* @handle - Pointer to a genlock handle to attach the lock to
* @fd - file descriptor for the exported lock
*
* Returns: A pointer to the attached lock structure
*/
struct genlock *genlock_attach_lock(struct genlock_handle *handle, int fd)
{
struct file *file;
struct genlock *lock;
if (handle->lock != NULL)
return ERR_PTR(-EINVAL);
file = fget(fd);
if (file == NULL)
return ERR_PTR(-EBADF);
lock = file->private_data;
fput(file);
if (lock == NULL)
return ERR_PTR(-EINVAL);
handle->lock = lock;
kref_get(&lock->refcount);
return lock;
}
EXPORT_SYMBOL(genlock_attach_lock);
/* Helper function that returns 1 if the specified handle holds the lock */
static int handle_has_lock(struct genlock *lock, struct genlock_handle *handle)
{
struct genlock_handle *h;
list_for_each_entry(h, &lock->active, entry) {
if (h == handle)
return 1;
}
return 0;
}
/* If the lock just became available, signal the next entity waiting for it */
static void _genlock_signal(struct genlock *lock)
{
if (list_empty(&lock->active)) {
/* If the list is empty, then the lock is free */
lock->state = _UNLOCKED;
/* Wake up the first process sitting in the queue */
wake_up(&lock->queue);
}
}
/* Attempt to release the handle's ownership of the lock */
static int _genlock_unlock(struct genlock *lock, struct genlock_handle *handle)
{
int ret = -EINVAL;
unsigned long irqflags;
spin_lock_irqsave(&lock->lock, irqflags);
if (lock->state == _UNLOCKED)
goto done;
/* Make sure this handle is an owner of the lock */
if (!handle_has_lock(lock, handle))
goto done;
/* If the handle holds no more references to the lock then
release it (maybe) */
if (--handle->active == 0) {
list_del(&handle->entry);
_genlock_signal(lock);
}
ret = 0;
done:
spin_unlock_irqrestore(&lock->lock, irqflags);
return ret;
}
/* Attempt to acquire the lock for the handle */
static int _genlock_lock(struct genlock *lock, struct genlock_handle *handle,
int op, int flags, uint32_t timeout)
{
unsigned long irqflags;
int ret = 0;
unsigned int ticks = msecs_to_jiffies(timeout);
spin_lock_irqsave(&lock->lock, irqflags);
/* Sanity check - no blocking locks in a debug context. Even if it
* succeeds without blocking, the mere idea is too dangerous to continue
*/
if (in_interrupt() && !(flags & GENLOCK_NOBLOCK))
BUG();
/* Fast path - the lock is unlocked, so go do the needful */
if (lock->state == _UNLOCKED)
goto dolock;
if (handle_has_lock(lock, handle)) {
/*
* If the handle already holds the lock and the type matches,
* then just increment the active pointer. This allows the
* handle to do recursive locks
*/
if (lock->state == op) {
handle->active++;
goto done;
}
/*
* If the handle holds a write lock then the owner can switch
* to a read lock if they want. Do the transition atomically
* then wake up any pending waiters in case they want a read
* lock too.
*/
if (op == _RDLOCK && handle->active == 1) {
lock->state = _RDLOCK;
wake_up(&lock->queue);
goto done;
}
/*
* Otherwise the user tried to turn a read into a write, and we
* don't allow that.
*/
ret = -EINVAL;
goto done;
}
/*
* If we request a read and the lock is held by a read, then go
* ahead and share the lock
*/
if (op == GENLOCK_RDLOCK && lock->state == _RDLOCK)
goto dolock;
/* Treat timeout 0 just like a NOBLOCK flag and return if the
lock cannot be acquired without blocking */
if (flags & GENLOCK_NOBLOCK || timeout == 0) {
ret = -EAGAIN;
goto done;
}
/* Wait while the lock remains in an incompatible state */
while (lock->state != _UNLOCKED) {
unsigned int elapsed;
spin_unlock_irqrestore(&lock->lock, irqflags);
elapsed = wait_event_interruptible_timeout(lock->queue,
lock->state == _UNLOCKED, ticks);
spin_lock_irqsave(&lock->lock, irqflags);
if (elapsed <= 0) {
ret = (elapsed < 0) ? elapsed : -ETIMEDOUT;
goto done;
}
ticks = elapsed;
}
dolock:
/* We can now get the lock, add ourselves to the list of owners */
list_add_tail(&handle->entry, &lock->active);
lock->state = op;
handle->active = 1;
done:
spin_unlock_irqrestore(&lock->lock, irqflags);
return ret;
}
/**
* genlock_lock - Acquire or release a lock
* @handle - pointer to the genlock handle that is requesting the lock
* @op - the operation to perform (RDLOCK, WRLOCK, UNLOCK)
* @flags - flags to control the operation
* @timeout - optional timeout to wait for the lock to come free
*
* Returns: 0 on success or error code on failure
*/
int genlock_lock(struct genlock_handle *handle, int op, int flags,
uint32_t timeout)
{
struct genlock *lock = handle->lock;
int ret = 0;
if (lock == NULL)
return -EINVAL;
switch (op) {
case GENLOCK_UNLOCK:
ret = _genlock_unlock(lock, handle);
break;
case GENLOCK_RDLOCK:
case GENLOCK_WRLOCK:
ret = _genlock_lock(lock, handle, op, flags, timeout);
break;
default:
ret = -EINVAL;
break;
}
return ret;
}
EXPORT_SYMBOL(genlock_lock);
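Taken together with the unlock path, a typical kernel-side critical section looks roughly like this (sketch only; h is a handle attached to a shared lock as above):

/* Wait up to 100 ms for exclusive access, then release when done. */
int ret = genlock_lock(h, GENLOCK_WRLOCK, 0, 100);
if (ret == 0) {
        /* ... touch the shared buffer ... */
        genlock_lock(h, GENLOCK_UNLOCK, 0, 0);
}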
/**
* genlock_wait - Wait for the lock to be released
* @handle - pointer to the genlock handle that is waiting for the lock
* @timeout - optional timeout to wait for the lock to get released
*/
int genlock_wait(struct genlock_handle *handle, uint32_t timeout)
{
struct genlock *lock = handle->lock;
unsigned long irqflags;
int ret = 0;
unsigned int ticks = msecs_to_jiffies(timeout);
if (lock == NULL)
return -EINVAL;
spin_lock_irqsave(&lock->lock, irqflags);
/*
* if timeout is 0 and the lock is already unlocked, then success
* otherwise return -EAGAIN
*/
if (timeout == 0) {
ret = (lock->state == _UNLOCKED) ? 0 : -EAGAIN;
goto done;
}
while (lock->state != _UNLOCKED) {
unsigned int elapsed;
spin_unlock_irqrestore(&lock->lock, irqflags);
elapsed = wait_event_interruptible_timeout(lock->queue,
lock->state == _UNLOCKED, ticks);
spin_lock_irqsave(&lock->lock, irqflags);
if (elapsed <= 0) {
ret = (elapsed < 0) ? elapsed : -ETIMEDOUT;
break;
}
ticks = elapsed;
}
done:
spin_unlock_irqrestore(&lock->lock, irqflags);
return ret;
}
/**
* genlock_release_lock - Release a lock attached to a handle
* @handle - Pointer to the handle holding the lock
*/
void genlock_release_lock(struct genlock_handle *handle)
{
unsigned long flags;
if (handle == NULL || handle->lock == NULL)
return;
spin_lock_irqsave(&handle->lock->lock, flags);
/* If the handle is holding the lock, then force it closed */
if (handle_has_lock(handle->lock, handle)) {
list_del(&handle->entry);
_genlock_signal(handle->lock);
}
spin_unlock_irqrestore(&handle->lock->lock, flags);
kref_put(&handle->lock->refcount, genlock_destroy);
handle->lock = NULL;
handle->active = 0;
}
EXPORT_SYMBOL(genlock_release_lock);
/*
* Release function called when all references to a handle are released
*/
static int genlock_handle_release(struct inode *inodep, struct file *file)
{
struct genlock_handle *handle = file->private_data;
genlock_release_lock(handle);
kfree(handle);
return 0;
}
static const struct file_operations genlock_handle_fops = {
.release = genlock_handle_release
};
/*
* Allocate a new genlock handle
*/
static struct genlock_handle *_genlock_get_handle(void)
{
struct genlock_handle *handle = kzalloc(sizeof(*handle), GFP_KERNEL);
if (handle == NULL)
return ERR_PTR(-ENOMEM);
return handle;
}
/**
* genlock_get_handle - Create a new genlock handle
*
* Returns: A pointer to a new genlock handle
*/
struct genlock_handle *genlock_get_handle(void)
{
struct genlock_handle *handle = _genlock_get_handle();
if (IS_ERR(handle))
return handle;
handle->file = anon_inode_getfile("genlock-handle",
&genlock_handle_fops, handle, O_RDWR);
return handle;
}
EXPORT_SYMBOL(genlock_get_handle);
/**
* genlock_put_handle - release a reference to a genlock handle
* @handle - A pointer to the handle to release
*/
void genlock_put_handle(struct genlock_handle *handle)
{
if (handle)
fput(handle->file);
}
EXPORT_SYMBOL(genlock_put_handle);
/**
* genlock_get_handle_fd - Get a handle reference from a file descriptor
* @fd - The file descriptor for a genlock handle
*/
struct genlock_handle *genlock_get_handle_fd(int fd)
{
struct file *file = fget(fd);
if (file == NULL)
return ERR_PTR(-EINVAL);
return file->private_data;
}
EXPORT_SYMBOL(genlock_get_handle_fd);
#ifdef CONFIG_GENLOCK_MISCDEVICE
static long genlock_dev_ioctl(struct file *filep, unsigned int cmd,
unsigned long arg)
{
struct genlock_lock param;
struct genlock_handle *handle = filep->private_data;
struct genlock *lock;
int ret;
switch (cmd) {
case GENLOCK_IOC_NEW: {
lock = genlock_create_lock(handle);
if (IS_ERR(lock))
return PTR_ERR(lock);
return 0;
}
case GENLOCK_IOC_EXPORT: {
if (handle->lock == NULL)
return -EINVAL;
ret = genlock_get_fd(handle->lock);
if (ret < 0)
return ret;
param.fd = ret;
if (copy_to_user((void __user *) arg, &param,
sizeof(param)))
return -EFAULT;
return 0;
}
case GENLOCK_IOC_ATTACH: {
if (copy_from_user(&param, (void __user *) arg,
sizeof(param)))
return -EFAULT;
lock = genlock_attach_lock(handle, param.fd);
if (IS_ERR(lock))
return PTR_ERR(lock);
return 0;
}
case GENLOCK_IOC_LOCK: {
if (copy_from_user(&param, (void __user *) arg,
sizeof(param)))
return -EFAULT;
return genlock_lock(handle, param.op, param.flags,
param.timeout);
}
case GENLOCK_IOC_WAIT: {
if (copy_from_user(&param, (void __user *) arg,
sizeof(param)))
return -EFAULT;
return genlock_wait(handle, param.timeout);
}
case GENLOCK_IOC_RELEASE: {
genlock_release_lock(handle);
return 0;
}
default:
return -EINVAL;
}
}
static int genlock_dev_release(struct inode *inodep, struct file *file)
{
struct genlock_handle *handle = file->private_data;
genlock_release_lock(handle);
kfree(handle);
return 0;
}
static int genlock_dev_open(struct inode *inodep, struct file *file)
{
struct genlock_handle *handle = _genlock_get_handle();
if (IS_ERR(handle))
return PTR_ERR(handle);
handle->file = file;
file->private_data = handle;
return 0;
}
static const struct file_operations genlock_dev_fops = {
.open = genlock_dev_open,
.release = genlock_dev_release,
.unlocked_ioctl = genlock_dev_ioctl,
};
static struct miscdevice genlock_dev;
static int genlock_dev_init(void)
{
genlock_dev.minor = MISC_DYNAMIC_MINOR;
genlock_dev.name = "genlock";
genlock_dev.fops = &genlock_dev_fops;
genlock_dev.parent = NULL;
return misc_register(&genlock_dev);
}
static void genlock_dev_close(void)
{
misc_deregister(&genlock_dev);
}
module_init(genlock_dev_init);
module_exit(genlock_dev_close);
#endif
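From userspace the same operations go through the miscdevice. A hedged sketch, assuming the GENLOCK_IOC_* numbers and the struct genlock_lock fields (op, flags, timeout, fd) used by the ioctl handler above are exported by <linux/genlock.h>:

/* Userspace sketch; assumes <fcntl.h>, <sys/ioctl.h>, <linux/genlock.h>. */
int fd = open("/dev/genlock", O_RDWR);
struct genlock_lock param = { .flags = 0, .timeout = 100 };

ioctl(fd, GENLOCK_IOC_NEW);             /* create a lock behind this handle */
param.op = GENLOCK_WRLOCK;
ioctl(fd, GENLOCK_IOC_LOCK, &param);    /* acquire, waiting up to 100 ms */
param.op = GENLOCK_UNLOCK;
ioctl(fd, GENLOCK_IOC_LOCK, &param);    /* release */
close(fd);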
@@ -1,2 +1 @@
obj-y += drm/ vga/
obj-$(CONFIG_MSM_KGSL) += msm/
@@ -1,113 +0,0 @@
config MSM_KGSL
tristate "MSM 3D Graphics driver"
default n
depends on ARCH_MSM && !ARCH_MSM7X00A && !ARCH_MSM7X25
select GENERIC_ALLOCATOR
select FW_LOADER
---help---
3D graphics driver. Required to use hardware accelerated
OpenGL ES 2.0 and 1.1.
config MSM_KGSL_CFF_DUMP
bool "Enable KGSL Common File Format (CFF) Dump Feature [Use with caution]"
default n
depends on MSM_KGSL
select RELAY
---help---
This is an analysis and diagnostic feature only, and should only be
turned on during KGSL GPU diagnostics and will slow down the KGSL
performance significantly, hence *do not use in production builds*.
When enabled, CFF Dump is on at boot. It can be turned off at runtime
via 'echo 0 > /d/kgsl/cff_dump'. The log can be captured via
/d/kgsl-cff/cpu[0|1].
config MSM_KGSL_CFF_DUMP_NO_CONTEXT_MEM_DUMP
bool "When selected will disable KGSL CFF Dump for context switches"
default n
depends on MSM_KGSL_CFF_DUMP
---help---
Dumping all the memory for every context switch can produce very
large log files; to reduce this, turn this feature on.
config MSM_KGSL_PSTMRTMDMP_CP_STAT_NO_DETAIL
bool "Disable human readable CP_STAT fields in post-mortem dump"
default n
depends on MSM_KGSL
---help---
For a more compact kernel log the human readable output of
CP_STAT can be turned off with this option.
config MSM_KGSL_PSTMRTMDMP_NO_IB_DUMP
bool "Disable dumping current IB1 and IB2 in post-mortem dump"
default n
depends on MSM_KGSL
---help---
For a more compact kernel log the IB1 and IB2 embedded dump
can be turned off with this option. Some IB dumps take up
so much space that vital other information gets cut from the
post-mortem dump.
config MSM_KGSL_PSTMRTMDMP_RB_HEX
bool "Use hex version for ring-buffer in post-mortem dump"
default n
depends on MSM_KGSL
---help---
Use hex version for the ring-buffer in the post-mortem dump, instead
of the human readable version.
config MSM_KGSL_2D
tristate "MSM 2D graphics driver. Required for OpenVG"
default y
depends on MSM_KGSL && !ARCH_MSM7X27 && !ARCH_MSM7X27A && !(ARCH_QSD8X50 && !MSM_SOC_REV_A)
config MSM_KGSL_DRM
bool "Build a DRM interface for the MSM_KGSL driver"
depends on MSM_KGSL && DRM
config MSM_KGSL_GPUMMU
bool "Enable the GPU MMU in the MSM_KGSL driver"
depends on MSM_KGSL && !MSM_KGSL_CFF_DUMP
default y
config MSM_KGSL_IOMMU
bool "Enable the use of IOMMU in the MSM_KGSL driver"
depends on MSM_KGSL && MSM_IOMMU && !MSM_KGSL_GPUMMU && !MSM_KGSL_CFF_DUMP
config MSM_KGSL_MMU
bool
depends on MSM_KGSL_GPUMMU || MSM_KGSL_IOMMU
default y
config KGSL_PER_PROCESS_PAGE_TABLE
bool "Enable Per Process page tables for the KGSL driver"
default n
depends on MSM_KGSL_GPUMMU && !MSM_KGSL_DRM
---help---
The MMU will use per process pagetables when enabled.
config MSM_KGSL_PAGE_TABLE_SIZE
hex "Size of pagetables"
default 0xFFF0000
---help---
Sets the pagetable size used by the MMU. The max value
is 0xFFF0000 or (256M - 64K).
config MSM_KGSL_PAGE_TABLE_COUNT
int "Minimum of concurrent pagetables to support"
default 8
depends on KGSL_PER_PROCESS_PAGE_TABLE
---help---
Specify the number of pagetables to allocate at init time
This is the number of concurrent processes that are guaranteed
to run at any time. Additional processes can be created dynamically
assuming there is enough contiguous memory to allocate the pagetable.
config MSM_KGSL_MMU_PAGE_FAULT
bool "Force the GPU MMU to page fault for unmapped regions"
default y
depends on MSM_KGSL_GPUMMU
config MSM_KGSL_DISABLE_SHADOW_WRITES
bool "Disable register shadow writes for context switches"
default n
depends on MSM_KGSL
@@ -1,34 +0,0 @@
ccflags-y := -Iinclude/drm
msm_kgsl_core-y = \
kgsl.o \
kgsl_sharedmem.o \
kgsl_pwrctrl.o \
kgsl_pwrscale.o \
kgsl_mmu.o \
kgsl_gpummu.o
msm_kgsl_core-$(CONFIG_DEBUG_FS) += kgsl_debugfs.o
msm_kgsl_core-$(CONFIG_MSM_KGSL_CFF_DUMP) += kgsl_cffdump.o
msm_kgsl_core-$(CONFIG_MSM_KGSL_DRM) += kgsl_drm.o
msm_kgsl_core-$(CONFIG_MSM_SCM) += kgsl_pwrscale_trustzone.o
msm_kgsl_core-$(CONFIG_MSM_SLEEP_STATS) += kgsl_pwrscale_idlestats.o
msm_adreno-y += \
adreno_ringbuffer.o \
adreno_drawctxt.o \
adreno_postmortem.o \
adreno_a2xx.o \
adreno.o
msm_adreno-$(CONFIG_DEBUG_FS) += adreno_debugfs.o
msm_z180-y += z180.o
msm_kgsl_core-objs = $(msm_kgsl_core-y)
msm_adreno-objs = $(msm_adreno-y)
msm_z180-objs = $(msm_z180-y)
obj-$(CONFIG_MSM_KGSL) += msm_kgsl_core.o
obj-$(CONFIG_MSM_KGSL) += msm_adreno.o
obj-$(CONFIG_MSM_KGSL_2D) += msm_z180.o
File diff suppressed because it is too large
@@ -1,129 +0,0 @@
/* Copyright (c) 2008-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __ADRENO_H
#define __ADRENO_H
#include "kgsl_device.h"
#include "adreno_drawctxt.h"
#include "adreno_ringbuffer.h"
#define DEVICE_3D_NAME "kgsl-3d"
#define DEVICE_3D0_NAME "kgsl-3d0"
#define ADRENO_DEVICE(device) \
KGSL_CONTAINER_OF(device, struct adreno_device, dev)
/* Flags to control command packet settings */
#define KGSL_CMD_FLAGS_PMODE 0x00000001
#define KGSL_CMD_FLAGS_NO_TS_CMP 0x00000002
#define KGSL_CMD_FLAGS_NOT_KERNEL_CMD 0x00000004
/* Command identifiers */
#define KGSL_CONTEXT_TO_MEM_IDENTIFIER 0xDEADBEEF
#define KGSL_CMD_IDENTIFIER 0xFEEDFACE
#ifdef CONFIG_MSM_SCM
#define ADRENO_DEFAULT_PWRSCALE_POLICY (&kgsl_pwrscale_policy_tz)
#else
#define ADRENO_DEFAULT_PWRSCALE_POLICY NULL
#endif
enum adreno_gpurev {
ADRENO_REV_UNKNOWN = 0,
ADRENO_REV_A200 = 200,
ADRENO_REV_A205 = 205,
ADRENO_REV_A220 = 220,
ADRENO_REV_A225 = 225,
};
struct adreno_gpudev;
struct adreno_device {
struct kgsl_device dev; /* Must be first field in this struct */
unsigned int chip_id;
enum adreno_gpurev gpurev;
struct kgsl_memregion gmemspace;
struct adreno_context *drawctxt_active;
const char *pfp_fwfile;
unsigned int *pfp_fw;
size_t pfp_fw_size;
const char *pm4_fwfile;
unsigned int *pm4_fw;
size_t pm4_fw_size;
struct adreno_ringbuffer ringbuffer;
unsigned int mharb;
struct adreno_gpudev *gpudev;
unsigned int wait_timeout;
};
struct adreno_gpudev {
int (*ctxt_gpustate_shadow)(struct adreno_device *,
struct adreno_context *);
int (*ctxt_gmem_shadow)(struct adreno_device *,
struct adreno_context *);
void (*ctxt_save)(struct adreno_device *, struct adreno_context *);
void (*ctxt_restore)(struct adreno_device *, struct adreno_context *);
irqreturn_t (*irq_handler)(struct adreno_device *);
void (*irq_control)(struct adreno_device *, int);
};
extern struct adreno_gpudev adreno_a2xx_gpudev;
int adreno_idle(struct kgsl_device *device, unsigned int timeout);
void adreno_regread(struct kgsl_device *device, unsigned int offsetwords,
unsigned int *value);
void adreno_regwrite(struct kgsl_device *device, unsigned int offsetwords,
unsigned int value);
uint8_t *kgsl_sharedmem_convertaddr(struct kgsl_device *device,
unsigned int pt_base, unsigned int gpuaddr, unsigned int *size);
static inline int adreno_is_a200(struct adreno_device *adreno_dev)
{
return (adreno_dev->gpurev == ADRENO_REV_A200);
}
static inline int adreno_is_a205(struct adreno_device *adreno_dev)
{
return (adreno_dev->gpurev == ADRENO_REV_A205);
}
static inline int adreno_is_a20x(struct adreno_device *adreno_dev)
{
return (adreno_dev->gpurev == ADRENO_REV_A200 ||
adreno_dev->gpurev == ADRENO_REV_A205);
}
static inline int adreno_is_a220(struct adreno_device *adreno_dev)
{
return (adreno_dev->gpurev == ADRENO_REV_A220);
}
static inline int adreno_is_a225(struct adreno_device *adreno_dev)
{
return (adreno_dev->gpurev == ADRENO_REV_A225);
}
static inline int adreno_is_a22x(struct adreno_device *adreno_dev)
{
return (adreno_dev->gpurev == ADRENO_REV_A220 ||
adreno_dev->gpurev == ADRENO_REV_A225);
}
static inline int adreno_is_a2xx(struct adreno_device *adreno_dev)
{
return (adreno_dev->gpurev <= ADRENO_REV_A225);
}
#endif /*__ADRENO_H */
File diff suppressed because it is too large
@@ -1,455 +0,0 @@
/* Copyright (c) 2002,2008-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/delay.h>
#include <linux/debugfs.h>
#include <linux/uaccess.h>
#include <linux/io.h>
#include "kgsl.h"
#include "adreno_postmortem.h"
#include "adreno.h"
#include "a2xx_reg.h"
unsigned int kgsl_cff_dump_enable;
int kgsl_pm_regs_enabled;
static uint32_t kgsl_ib_base;
static uint32_t kgsl_ib_size;
static struct dentry *pm_d_debugfs;
static int pm_dump_set(void *data, u64 val)
{
struct kgsl_device *device = data;
if (val) {
mutex_lock(&device->mutex);
adreno_postmortem_dump(device, 1);
mutex_unlock(&device->mutex);
}
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(pm_dump_fops,
NULL,
pm_dump_set, "%llu\n");
static int pm_regs_enabled_set(void *data, u64 val)
{
kgsl_pm_regs_enabled = val ? 1 : 0;
return 0;
}
static int pm_regs_enabled_get(void *data, u64 *val)
{
*val = kgsl_pm_regs_enabled;
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(pm_regs_enabled_fops,
pm_regs_enabled_get,
pm_regs_enabled_set, "%llu\n");
static int kgsl_cff_dump_enable_set(void *data, u64 val)
{
#ifdef CONFIG_MSM_KGSL_CFF_DUMP
kgsl_cff_dump_enable = (val != 0);
return 0;
#else
return -EINVAL;
#endif
}
static int kgsl_cff_dump_enable_get(void *data, u64 *val)
{
*val = kgsl_cff_dump_enable;
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(kgsl_cff_dump_enable_fops, kgsl_cff_dump_enable_get,
kgsl_cff_dump_enable_set, "%llu\n");
static int kgsl_dbgfs_open(struct inode *inode, struct file *file)
{
file->f_mode &= ~(FMODE_PREAD | FMODE_PWRITE);
file->private_data = inode->i_private;
return 0;
}
static int kgsl_dbgfs_release(struct inode *inode, struct file *file)
{
return 0;
}
static int kgsl_hex_dump(const char *prefix, int c, uint8_t *data,
int rowc, int linec, char __user *buff)
{
int ss;
/* Prefix of 20 chars max, 32 bytes per row, in groups of four - that's
* 8 groups at 8 chars per group plus a space, plus new-line, plus
* ending character */
char linebuf[20 + 64 + 1 + 1];
ss = snprintf(linebuf, sizeof(linebuf), prefix, c);
hex_dump_to_buffer(data, linec, rowc, 4, linebuf+ss,
sizeof(linebuf)-ss, 0);
strlcat(linebuf, "\n", sizeof(linebuf));
linebuf[sizeof(linebuf)-1] = 0;
ss = strlen(linebuf);
if (copy_to_user(buff, linebuf, ss+1))
return -EFAULT;
return ss;
}
static ssize_t kgsl_ib_dump_read(
struct file *file,
char __user *buff,
size_t buff_count,
loff_t *ppos)
{
int i, count = kgsl_ib_size, remaining, pos = 0, tot = 0, ss;
struct kgsl_device *device = file->private_data;
const int rowc = 32;
unsigned int pt_base, ib_memsize;
uint8_t *base_addr;
char linebuf[80];
if (!ppos || !device || !kgsl_ib_base)
return 0;
kgsl_regread(device, MH_MMU_PT_BASE, &pt_base);
base_addr = kgsl_sharedmem_convertaddr(device, pt_base, kgsl_ib_base,
&ib_memsize);
if (!base_addr)
return 0;
pr_info("%s ppos=%ld, buff_count=%d, count=%d\n", __func__, (long)*ppos,
buff_count, count);
ss = snprintf(linebuf, sizeof(linebuf), "IB: base=%08x(%08x"
"), size=%d, memsize=%d\n", kgsl_ib_base,
(uint32_t)base_addr, kgsl_ib_size, ib_memsize);
if (*ppos == 0) {
if (copy_to_user(buff, linebuf, ss+1))
return -EFAULT;
tot += ss;
buff += ss;
*ppos += ss;
}
pos += ss;
remaining = count;
for (i = 0; i < count; i += rowc) {
int linec = min(remaining, rowc);
remaining -= rowc;
ss = kgsl_hex_dump("IB: %05x: ", i, base_addr, rowc, linec,
buff);
if (ss < 0)
return ss;
if (pos >= *ppos) {
if (tot+ss >= buff_count) {
ss = copy_to_user(buff, "", 1);
return tot;
}
tot += ss;
buff += ss;
*ppos += ss;
}
pos += ss;
base_addr += linec;
}
return tot;
}
static ssize_t kgsl_ib_dump_write(
struct file *file,
const char __user *buff,
size_t count,
loff_t *ppos)
{
char local_buff[64];
if (count >= sizeof(local_buff))
return -EFAULT;
if (copy_from_user(local_buff, buff, count))
return -EFAULT;
local_buff[count] = 0; /* end of string */
sscanf(local_buff, "%x %d", &kgsl_ib_base, &kgsl_ib_size);
pr_info("%s: base=%08X size=%d\n", __func__, kgsl_ib_base,
kgsl_ib_size);
return count;
}
static const struct file_operations kgsl_ib_dump_fops = {
.open = kgsl_dbgfs_open,
.release = kgsl_dbgfs_release,
.read = kgsl_ib_dump_read,
.write = kgsl_ib_dump_write,
};
static int kgsl_regread_nolock(struct kgsl_device *device,
unsigned int offsetwords, unsigned int *value)
{
unsigned int *reg;
if (offsetwords*sizeof(uint32_t) >= device->regspace.sizebytes) {
KGSL_DRV_ERR(device, "invalid offset %d\n", offsetwords);
return -ERANGE;
}
reg = (unsigned int *)(device->regspace.mmio_virt_base
+ (offsetwords << 2));
*value = __raw_readl(reg);
return 0;
}
#define KGSL_ISTORE_START 0x5000
#define KGSL_ISTORE_LENGTH 0x600
static ssize_t kgsl_istore_read(
struct file *file,
char __user *buff,
size_t buff_count,
loff_t *ppos)
{
int i, count = KGSL_ISTORE_LENGTH, remaining, pos = 0, tot = 0;
struct kgsl_device *device = file->private_data;
const int rowc = 8;
if (!ppos || !device)
return 0;
remaining = count;
for (i = 0; i < count; i += rowc) {
unsigned int vals[rowc];
int j, ss;
int linec = min(remaining, rowc);
remaining -= rowc;
if (pos >= *ppos) {
for (j = 0; j < linec; ++j)
kgsl_regread_nolock(device,
KGSL_ISTORE_START+i+j, vals+j);
} else
memset(vals, 0, sizeof(vals));
ss = kgsl_hex_dump("IS: %04x: ", i, (uint8_t *)vals, rowc*4,
linec*4, buff);
if (ss < 0)
return ss;
if (pos >= *ppos) {
if (tot+ss >= buff_count)
return tot;
tot += ss;
buff += ss;
*ppos += ss;
}
pos += ss;
}
return tot;
}
static const struct file_operations kgsl_istore_fops = {
.open = kgsl_dbgfs_open,
.release = kgsl_dbgfs_release,
.read = kgsl_istore_read,
.llseek = default_llseek,
};
typedef void (*reg_read_init_t)(struct kgsl_device *device);
typedef void (*reg_read_fill_t)(struct kgsl_device *device, int i,
unsigned int *vals, int linec);
static ssize_t kgsl_reg_read(struct kgsl_device *device, int count,
reg_read_init_t reg_read_init,
reg_read_fill_t reg_read_fill, const char *prefix, char __user *buff,
loff_t *ppos)
{
int i, remaining;
const int rowc = 8;
if (!ppos || *ppos || !device)
return 0;
mutex_lock(&device->mutex);
reg_read_init(device);
remaining = count;
for (i = 0; i < count; i += rowc) {
unsigned int vals[rowc];
int ss;
int linec = min(remaining, rowc);
remaining -= rowc;
reg_read_fill(device, i, vals, linec);
ss = kgsl_hex_dump(prefix, i, (uint8_t *)vals, rowc*4, linec*4,
buff);
if (ss < 0) {
mutex_unlock(&device->mutex);
return ss;
}
buff += ss;
*ppos += ss;
}
mutex_unlock(&device->mutex);
return *ppos;
}
static void kgsl_sx_reg_read_init(struct kgsl_device *device)
{
kgsl_regwrite(device, REG_RBBM_PM_OVERRIDE2, 0xFF);
kgsl_regwrite(device, REG_RBBM_DEBUG_CNTL, 0);
}
static void kgsl_sx_reg_read_fill(struct kgsl_device *device, int i,
unsigned int *vals, int linec)
{
int j;
for (j = 0; j < linec; ++j) {
kgsl_regwrite(device, REG_RBBM_DEBUG_CNTL, 0x1B00 | i);
kgsl_regread(device, REG_RBBM_DEBUG_OUT, vals+j);
}
}
static ssize_t kgsl_sx_debug_read(
struct file *file,
char __user *buff,
size_t buff_count,
loff_t *ppos)
{
struct kgsl_device *device = file->private_data;
return kgsl_reg_read(device, 0x1B, kgsl_sx_reg_read_init,
kgsl_sx_reg_read_fill, "SX: %02x: ", buff, ppos);
}
static const struct file_operations kgsl_sx_debug_fops = {
.open = kgsl_dbgfs_open,
.release = kgsl_dbgfs_release,
.read = kgsl_sx_debug_read,
};
static void kgsl_cp_reg_read_init(struct kgsl_device *device)
{
kgsl_regwrite(device, REG_RBBM_DEBUG_CNTL, 0);
}
static void kgsl_cp_reg_read_fill(struct kgsl_device *device, int i,
unsigned int *vals, int linec)
{
int j;
for (j = 0; j < linec; ++j) {
kgsl_regwrite(device, REG_RBBM_DEBUG_CNTL, 0x1628);
kgsl_regread(device, REG_RBBM_DEBUG_OUT, vals+j);
msleep(100);
}
}
static ssize_t kgsl_cp_debug_read(
struct file *file,
char __user *buff,
size_t buff_count,
loff_t *ppos)
{
struct kgsl_device *device = file->private_data;
return kgsl_reg_read(device, 20, kgsl_cp_reg_read_init,
kgsl_cp_reg_read_fill,
"CP: %02x: ", buff, ppos);
}
static const struct file_operations kgsl_cp_debug_fops = {
.open = kgsl_dbgfs_open,
.release = kgsl_dbgfs_release,
.read = kgsl_cp_debug_read,
};
static void kgsl_mh_reg_read_init(struct kgsl_device *device)
{
kgsl_regwrite(device, REG_RBBM_DEBUG_CNTL, 0);
}
static void kgsl_mh_reg_read_fill(struct kgsl_device *device, int i,
unsigned int *vals, int linec)
{
int j;
for (j = 0; j < linec; ++j) {
kgsl_regwrite(device, MH_DEBUG_CTRL, i+j);
kgsl_regread(device, MH_DEBUG_DATA, vals+j);
}
}
static ssize_t kgsl_mh_debug_read(
struct file *file,
char __user *buff,
size_t buff_count,
loff_t *ppos)
{
struct kgsl_device *device = file->private_data;
return kgsl_reg_read(device, 0x40, kgsl_mh_reg_read_init,
kgsl_mh_reg_read_fill,
"MH: %02x: ", buff, ppos);
}
static const struct file_operations kgsl_mh_debug_fops = {
.open = kgsl_dbgfs_open,
.release = kgsl_dbgfs_release,
.read = kgsl_mh_debug_read,
};
void adreno_debugfs_init(struct kgsl_device *device)
{
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
if (!device->d_debugfs || IS_ERR(device->d_debugfs))
return;
debugfs_create_file("ib_dump", 0600, device->d_debugfs, device,
&kgsl_ib_dump_fops);
debugfs_create_file("istore", 0400, device->d_debugfs, device,
&kgsl_istore_fops);
debugfs_create_file("sx_debug", 0400, device->d_debugfs, device,
&kgsl_sx_debug_fops);
debugfs_create_file("cp_debug", 0400, device->d_debugfs, device,
&kgsl_cp_debug_fops);
debugfs_create_file("mh_debug", 0400, device->d_debugfs, device,
&kgsl_mh_debug_fops);
debugfs_create_file("cff_dump", 0644, device->d_debugfs, device,
&kgsl_cff_dump_enable_fops);
debugfs_create_u32("wait_timeout", 0644, device->d_debugfs,
&adreno_dev->wait_timeout);
/* Create post mortem control files */
pm_d_debugfs = debugfs_create_dir("postmortem", device->d_debugfs);
if (IS_ERR(pm_d_debugfs))
return;
debugfs_create_file("dump", 0600, pm_d_debugfs, device,
&pm_dump_fops);
debugfs_create_file("regs_enabled", 0644, pm_d_debugfs, device,
&pm_regs_enabled_fops);
}
@@ -1,40 +0,0 @@
/* Copyright (c) 2002,2008-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __ADRENO_DEBUGFS_H
#define __ADRENO_DEBUGFS_H
#ifdef CONFIG_DEBUG_FS
void adreno_debugfs_init(struct kgsl_device *device);
extern int kgsl_pm_regs_enabled;
static inline int kgsl_pmregs_enabled(void)
{
return kgsl_pm_regs_enabled;
}
#else
static inline void adreno_debugfs_init(struct kgsl_device *device)
{
}
static inline int kgsl_pmregs_enabled(void)
{
/* If debugfs is turned off, then always print registers */
return 1;
}
#endif
#endif /* __ADRENO_DEBUGFS_H */
@@ -1,266 +0,0 @@
/* Copyright (c) 2002,2007-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/slab.h>
#include "kgsl.h"
#include "kgsl_sharedmem.h"
#include "adreno.h"
/* quad for copying GMEM to context shadow */
#define QUAD_LEN 12
static unsigned int gmem_copy_quad[QUAD_LEN] = {
0x00000000, 0x00000000, 0x3f800000,
0x00000000, 0x00000000, 0x3f800000,
0x00000000, 0x00000000, 0x3f800000,
0x00000000, 0x00000000, 0x3f800000
};
#define TEXCOORD_LEN 8
static unsigned int gmem_copy_texcoord[TEXCOORD_LEN] = {
0x00000000, 0x3f800000,
0x3f800000, 0x3f800000,
0x00000000, 0x00000000,
0x3f800000, 0x00000000
};
/*
* Helper functions
* These are global helper functions used by the GPUs during context switch
*/
/**
* uint2float - convert a uint to IEEE754 single precision float
* @ uintval - value to convert
*/
unsigned int uint2float(unsigned int uintval)
{
unsigned int exp, frac = 0;
if (uintval == 0)
return 0;
exp = ilog2(uintval);
/* Calculate fraction */
if (23 > exp)
frac = (uintval & (~(1 << exp))) << (23 - exp);
/* Exp is biased by 127 and shifted 23 bits */
exp = (exp + 127) << 23;
return exp | frac;
}
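As a quick check of the conversion above: for uintval = 12, ilog2() gives exp = 3, the fraction becomes (12 & ~(1 << 3)) << 20 = 0x400000, and the biased exponent is (3 + 127) << 23 = 0x41000000, so uint2float(12) returns 0x41400000, the IEEE754 bit pattern of 12.0f.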
static void set_gmem_copy_quad(struct gmem_shadow_t *shadow)
{
/* set vertex buffer values */
gmem_copy_quad[1] = uint2float(shadow->height);
gmem_copy_quad[3] = uint2float(shadow->width);
gmem_copy_quad[4] = uint2float(shadow->height);
gmem_copy_quad[9] = uint2float(shadow->width);
gmem_copy_quad[0] = 0;
gmem_copy_quad[6] = 0;
gmem_copy_quad[7] = 0;
gmem_copy_quad[10] = 0;
memcpy(shadow->quad_vertices.hostptr, gmem_copy_quad, QUAD_LEN << 2);
memcpy(shadow->quad_texcoords.hostptr, gmem_copy_texcoord,
TEXCOORD_LEN << 2);
}
/**
* build_quad_vtxbuff - Create a quad for saving/restoring GMEM
* @ context - Pointer to the context being created
* @ shadow - Pointer to the GMEM shadow structure
* @ incmd - Pointer to pointer to the temporary command buffer
*/
/* quad for saving/restoring gmem */
void build_quad_vtxbuff(struct adreno_context *drawctxt,
struct gmem_shadow_t *shadow, unsigned int **incmd)
{
unsigned int *cmd = *incmd;
/* quad vertex buffer location (in GPU space) */
shadow->quad_vertices.hostptr = cmd;
shadow->quad_vertices.gpuaddr = virt2gpu(cmd, &drawctxt->gpustate);
cmd += QUAD_LEN;
/* tex coord buffer location (in GPU space) */
shadow->quad_texcoords.hostptr = cmd;
shadow->quad_texcoords.gpuaddr = virt2gpu(cmd, &drawctxt->gpustate);
cmd += TEXCOORD_LEN;
set_gmem_copy_quad(shadow);
*incmd = cmd;
}
/**
* adreno_drawctxt_create - create a new adreno draw context
* @device - KGSL device to create the context on
* @pagetable - Pagetable for the context
* @context- Generic KGSL context structure
* @flags - flags for the context (passed from user space)
*
* Create a new draw context for the 3D core. Return 0 on success,
* or error code on failure.
*/
int adreno_drawctxt_create(struct kgsl_device *device,
struct kgsl_pagetable *pagetable,
struct kgsl_context *context, uint32_t flags)
{
struct adreno_context *drawctxt;
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
int ret;
drawctxt = kzalloc(sizeof(struct adreno_context), GFP_KERNEL);
if (drawctxt == NULL)
return -ENOMEM;
drawctxt->pagetable = pagetable;
drawctxt->bin_base_offset = 0;
/* FIXME: Deal with preambles */
ret = adreno_dev->gpudev->ctxt_gpustate_shadow(adreno_dev, drawctxt);
if (ret)
goto err;
/* Save the shader instruction memory on context switching */
drawctxt->flags |= CTXT_FLAGS_SHADER_SAVE;
if (!(flags & KGSL_CONTEXT_NO_GMEM_ALLOC)) {
/* create gmem shadow */
ret = adreno_dev->gpudev->ctxt_gmem_shadow(adreno_dev,
drawctxt);
if (ret != 0)
goto err;
}
context->devctxt = drawctxt;
return 0;
err:
kgsl_sharedmem_free(&drawctxt->gpustate);
kfree(drawctxt);
return ret;
}
/**
* adreno_drawctxt_destroy - destroy a draw context
* @device - KGSL device that owns the context
* @context- Generic KGSL context container for the context
*
* Destroy an existing context. Return 0 on success or error
* code on failure.
*/
/* destroy a drawing context */
void adreno_drawctxt_destroy(struct kgsl_device *device,
struct kgsl_context *context)
{
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
struct adreno_context *drawctxt = context->devctxt;
if (drawctxt == NULL)
return;
/* deactivate context */
if (adreno_dev->drawctxt_active == drawctxt) {
/* no need to save GMEM or shader, the context is
* being destroyed.
*/
drawctxt->flags &= ~(CTXT_FLAGS_GMEM_SAVE |
CTXT_FLAGS_SHADER_SAVE |
CTXT_FLAGS_GMEM_SHADOW |
CTXT_FLAGS_STATE_SHADOW);
adreno_drawctxt_switch(adreno_dev, NULL, 0);
}
adreno_idle(device, KGSL_TIMEOUT_DEFAULT);
kgsl_sharedmem_free(&drawctxt->gpustate);
kgsl_sharedmem_free(&drawctxt->context_gmem_shadow.gmemshadow);
kfree(drawctxt);
context->devctxt = NULL;
}
/**
* adreno_drawctxt_set_bin_base_offset - set bin base offset for the context
* @device - KGSL device that owns the context
* @context- Generic KGSL context container for the context
* @offset - Offset to set
*
* Set the bin base offset for A2XX devices. Not valid for A3XX devices.
*/
void adreno_drawctxt_set_bin_base_offset(struct kgsl_device *device,
struct kgsl_context *context,
unsigned int offset)
{
struct adreno_context *drawctxt = context->devctxt;
if (drawctxt)
drawctxt->bin_base_offset = offset;
}
/**
* adreno_drawctxt_switch - switch the current draw context
* @adreno_dev - The 3D device that owns the context
* @drawctxt - the 3D context to switch to
* @flags - Flags to accompany the switch (from user space)
*
* Switch the current draw context
*/
void adreno_drawctxt_switch(struct adreno_device *adreno_dev,
struct adreno_context *drawctxt,
unsigned int flags)
{
struct kgsl_device *device = &adreno_dev->dev;
if (drawctxt) {
if (flags & KGSL_CONTEXT_SAVE_GMEM)
/* Set the flag in context so that the save is done
* when this context is switched out. */
drawctxt->flags |= CTXT_FLAGS_GMEM_SAVE;
else
/* Remove GMEM saving flag from the context */
drawctxt->flags &= ~CTXT_FLAGS_GMEM_SAVE;
}
/* already current? */
if (adreno_dev->drawctxt_active == drawctxt)
return;
KGSL_CTXT_INFO(device, "from %p to %p flags %d\n",
adreno_dev->drawctxt_active, drawctxt, flags);
/* Save the old context */
adreno_dev->gpudev->ctxt_save(adreno_dev, adreno_dev->drawctxt_active);
/* Set the new context */
adreno_dev->drawctxt_active = drawctxt;
adreno_dev->gpudev->ctxt_restore(adreno_dev, drawctxt);
}
@@ -1,151 +0,0 @@
/* Copyright (c) 2002,2007-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __ADRENO_DRAWCTXT_H
#define __ADRENO_DRAWCTXT_H
#include "adreno_pm4types.h"
#include "a2xx_reg.h"
/* Flags */
#define CTXT_FLAGS_NOT_IN_USE 0x00000000
#define CTXT_FLAGS_IN_USE 0x00000001
/* state shadow memory allocated */
#define CTXT_FLAGS_STATE_SHADOW 0x00000010
/* gmem shadow memory allocated */
#define CTXT_FLAGS_GMEM_SHADOW 0x00000100
/* gmem must be copied to shadow */
#define CTXT_FLAGS_GMEM_SAVE 0x00000200
/* gmem can be restored from shadow */
#define CTXT_FLAGS_GMEM_RESTORE 0x00000400
/* shader must be copied to shadow */
#define CTXT_FLAGS_SHADER_SAVE 0x00002000
/* shader can be restored from shadow */
#define CTXT_FLAGS_SHADER_RESTORE 0x00004000
/* Context has caused a GPU hang */
#define CTXT_FLAGS_GPU_HANG 0x00008000
struct kgsl_device;
struct adreno_device;
struct kgsl_device_private;
struct kgsl_context;
/* draw context */
struct gmem_shadow_t {
struct kgsl_memdesc gmemshadow; /* Shadow buffer address */
/* 256 KB GMEM surface = 4 bytes-per-pixel x 256 pixels/row x
* 256 rows. */
/* width & height must be multiples of 32, in case tiled textures
* are used. */
enum COLORFORMATX format;
unsigned int size; /* Size of surface used to store GMEM */
unsigned int width; /* Width of surface used to store GMEM */
unsigned int height; /* Height of surface used to store GMEM */
unsigned int pitch; /* Pitch of surface used to store GMEM */
unsigned int gmem_pitch; /* Pitch value used for GMEM */
unsigned int *gmem_save_commands;
unsigned int *gmem_restore_commands;
unsigned int gmem_save[3];
unsigned int gmem_restore[3];
struct kgsl_memdesc quad_vertices;
struct kgsl_memdesc quad_texcoords;
};
struct adreno_context {
uint32_t flags;
struct kgsl_pagetable *pagetable;
struct kgsl_memdesc gpustate;
unsigned int reg_save[3];
unsigned int reg_restore[3];
unsigned int shader_save[3];
unsigned int shader_fixup[3];
unsigned int shader_restore[3];
unsigned int chicken_restore[3];
unsigned int bin_base_offset;
/* Information of the GMEM shadow that is created in context create */
struct gmem_shadow_t context_gmem_shadow;
};
int adreno_drawctxt_create(struct kgsl_device *device,
struct kgsl_pagetable *pagetable,
struct kgsl_context *context,
uint32_t flags);
void adreno_drawctxt_destroy(struct kgsl_device *device,
struct kgsl_context *context);
void adreno_drawctxt_switch(struct adreno_device *adreno_dev,
struct adreno_context *drawctxt,
unsigned int flags);
void adreno_drawctxt_set_bin_base_offset(struct kgsl_device *device,
struct kgsl_context *context,
unsigned int offset);
/* GPU context switch helper functions */
void build_quad_vtxbuff(struct adreno_context *drawctxt,
struct gmem_shadow_t *shadow, unsigned int **incmd);
unsigned int uint2float(unsigned int);
static inline unsigned int virt2gpu(unsigned int *cmd,
struct kgsl_memdesc *memdesc)
{
return memdesc->gpuaddr + ((char *) cmd - (char *) memdesc->hostptr);
}
static inline void create_ib1(struct adreno_context *drawctxt,
unsigned int *cmd,
unsigned int *start,
unsigned int *end)
{
cmd[0] = CP_HDR_INDIRECT_BUFFER_PFD;
cmd[1] = virt2gpu(start, &drawctxt->gpustate);
cmd[2] = end - start;
}
static inline unsigned int *reg_range(unsigned int *cmd, unsigned int start,
unsigned int end)
{
*cmd++ = CP_REG(start); /* h/w regs, start addr */
*cmd++ = end - start + 1; /* count */
return cmd;
}
static inline void calc_gmemsize(struct gmem_shadow_t *shadow, int gmem_size)
{
int w = 64, h = 64;
shadow->format = COLORX_8_8_8_8;
/* convert from bytes to 32-bit words */
gmem_size = (gmem_size + 3) / 4;
while ((w * h) < gmem_size) {
if (w < h)
w *= 2;
else
h *= 2;
}
shadow->pitch = shadow->width = w;
shadow->height = h;
shadow->gmem_pitch = shadow->pitch;
shadow->size = shadow->pitch * shadow->height * 4;
}
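As a worked example of the sizing loop: a 256 KB GMEM is (262144 + 3) / 4 = 65536 32-bit words; starting from 64 x 64, the loop doubles width and height alternately until 256 x 256 = 65536 covers it, giving pitch = width = 256, height = 256, and size = 256 * 256 * 4 = 256 KB.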
#endif /* __ADRENO_DRAWCTXT_H */
@@ -1,193 +0,0 @@
/* Copyright (c) 2002,2007-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __ADRENO_PM4TYPES_H
#define __ADRENO_PM4TYPES_H
#define CP_PKT_MASK 0xc0000000
#define CP_TYPE0_PKT ((unsigned int)0 << 30)
#define CP_TYPE1_PKT ((unsigned int)1 << 30)
#define CP_TYPE2_PKT ((unsigned int)2 << 30)
#define CP_TYPE3_PKT ((unsigned int)3 << 30)
/* type3 packets */
/* initialize CP's micro-engine */
#define CP_ME_INIT 0x48
/* skip N 32-bit words to get to the next packet */
#define CP_NOP 0x10
/* indirect buffer dispatch. prefetch parser uses this packet type to determine
* whether to pre-fetch the IB
*/
#define CP_INDIRECT_BUFFER 0x3f
/* indirect buffer dispatch. same as IB, but init is pipelined */
#define CP_INDIRECT_BUFFER_PFD 0x37
/* wait for the IDLE state of the engine */
#define CP_WAIT_FOR_IDLE 0x26
/* wait until a register or memory location is a specific value */
#define CP_WAIT_REG_MEM 0x3c
/* wait until a register location is equal to a specific value */
#define CP_WAIT_REG_EQ 0x52
/* wait until a register location is >= a specific value */
#define CP_WAT_REG_GTE 0x53
/* wait until a read completes */
#define CP_WAIT_UNTIL_READ 0x5c
/* wait until all base/size writes from an IB_PFD packet have completed */
#define CP_WAIT_IB_PFD_COMPLETE 0x5d
/* register read/modify/write */
#define CP_REG_RMW 0x21
/* reads register in chip and writes to memory */
#define CP_REG_TO_MEM 0x3e
/* write N 32-bit words to memory */
#define CP_MEM_WRITE 0x3d
/* write CP_PROG_COUNTER value to memory */
#define CP_MEM_WRITE_CNTR 0x4f
/* conditional execution of a sequence of packets */
#define CP_COND_EXEC 0x44
/* conditional write to memory or register */
#define CP_COND_WRITE 0x45
/* generate an event that creates a write to memory when completed */
#define CP_EVENT_WRITE 0x46
/* generate a VS|PS_done event */
#define CP_EVENT_WRITE_SHD 0x58
/* generate a cache flush done event */
#define CP_EVENT_WRITE_CFL 0x59
/* generate a z_pass done event */
#define CP_EVENT_WRITE_ZPD 0x5b
/* initiate fetch of index buffer and draw */
#define CP_DRAW_INDX 0x22
/* draw using supplied indices in packet */
#define CP_DRAW_INDX_2 0x36
/* initiate fetch of index buffer and binIDs and draw */
#define CP_DRAW_INDX_BIN 0x34
/* initiate fetch of bin IDs and draw using supplied indices */
#define CP_DRAW_INDX_2_BIN 0x35
/* begin/end initiator for viz query extent processing */
#define CP_VIZ_QUERY 0x23
/* fetch state sub-blocks and initiate shader code DMAs */
#define CP_SET_STATE 0x25
/* load constant into chip and to memory */
#define CP_SET_CONSTANT 0x2d
/* load sequencer instruction memory (pointer-based) */
#define CP_IM_LOAD 0x27
/* load sequencer instruction memory (code embedded in packet) */
#define CP_IM_LOAD_IMMEDIATE 0x2b
/* load constants from a location in memory */
#define CP_LOAD_CONSTANT_CONTEXT 0x2e
/* selective invalidation of state pointers */
#define CP_INVALIDATE_STATE 0x3b
/* dynamically changes shader instruction memory partition */
#define CP_SET_SHADER_BASES 0x4A
/* sets the 64-bit BIN_MASK register in the PFP */
#define CP_SET_BIN_MASK 0x50
/* sets the 64-bit BIN_SELECT register in the PFP */
#define CP_SET_BIN_SELECT 0x51
/* updates the current context, if needed */
#define CP_CONTEXT_UPDATE 0x5e
/* generate interrupt from the command stream */
#define CP_INTERRUPT 0x40
/* copy sequencer instruction memory to system memory */
#define CP_IM_STORE 0x2c
/*
* for a20x
* program an offset that will be added to the BIN_BASE value of
* the 3D_DRAW_INDX_BIN packet
*/
#define CP_SET_BIN_BASE_OFFSET 0x4B
/*
* for a22x
* sets draw initiator flags register in PFP, gets bitwise-ORed into
* every draw initiator
*/
#define CP_SET_DRAW_INIT_FLAGS 0x4B
#define CP_SET_PROTECTED_MODE 0x5f /* sets the register protection mode */
/* packet header building macros */
#define cp_type0_packet(regindx, cnt) \
(CP_TYPE0_PKT | (((cnt)-1) << 16) | ((regindx) & 0x7FFF))
#define cp_type0_packet_for_sameregister(regindx, cnt) \
(CP_TYPE0_PKT | (((cnt)-1) << 16) | (1 << 15) | \
((regindx) & 0x7FFF))
#define cp_type1_packet(reg0, reg1) \
(CP_TYPE1_PKT | ((reg1) << 12) | (reg0))
#define cp_type3_packet(opcode, cnt) \
(CP_TYPE3_PKT | (((cnt)-1) << 16) | (((opcode) & 0xFF) << 8))
#define cp_predicated_type3_packet(opcode, cnt) \
(CP_TYPE3_PKT | (((cnt)-1) << 16) | (((opcode) & 0xFF) << 8) | 0x1)
#define cp_nop_packet(cnt) \
(CP_TYPE3_PKT | (((cnt)-1) << 16) | (CP_NOP << 8))
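/*
 * Editor's worked example (not part of the original header): the ME_INIT
 * header below expands as
 *   cp_type3_packet(CP_ME_INIT, 18)
 *     = 0xC0000000 | ((18 - 1) << 16) | ((0x48 & 0xFF) << 8)
 *     = 0xC0114800
 * with bits [31:30] holding the packet type, bits [29:16] count-1, and
 * bits [15:8] the opcode.
 */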
/* packet headers */
#define CP_HDR_ME_INIT cp_type3_packet(CP_ME_INIT, 18)
#define CP_HDR_INDIRECT_BUFFER_PFD cp_type3_packet(CP_INDIRECT_BUFFER_PFD, 2)
#define CP_HDR_INDIRECT_BUFFER cp_type3_packet(CP_INDIRECT_BUFFER, 2)
/* dword base address of the GFX decode space */
#define SUBBLOCK_OFFSET(reg) ((unsigned int)((reg) - (0x2000)))
/* gmem command buffer length */
#define CP_REG(reg) ((0x4 << 16) | (SUBBLOCK_OFFSET(reg)))
#endif /* __ADRENO_PM4TYPES_H */

@ -1,867 +0,0 @@
/* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/vmalloc.h>
#include "kgsl.h"
#include "adreno.h"
#include "adreno_pm4types.h"
#include "adreno_ringbuffer.h"
#include "adreno_postmortem.h"
#include "adreno_debugfs.h"
#include "kgsl_cffdump.h"
#include "a2xx_reg.h"
#define INVALID_RB_CMD 0xaaaaaaaa
#define NUM_DWORDS_OF_RINGBUFFER_HISTORY 100
struct pm_id_name {
uint32_t id;
char name[9];
};
static const struct pm_id_name pm0_types[] = {
{REG_PA_SC_AA_CONFIG, "RPASCAAC"},
{REG_RBBM_PM_OVERRIDE2, "RRBBPMO2"},
{REG_SCRATCH_REG2, "RSCRTRG2"},
{REG_SQ_GPR_MANAGEMENT, "RSQGPRMN"},
{REG_SQ_INST_STORE_MANAGMENT, "RSQINSTS"},
{REG_TC_CNTL_STATUS, "RTCCNTLS"},
{REG_TP0_CHICKEN, "RTP0CHCK"},
{REG_CP_TIMESTAMP, "CP_TM_ST"},
};
static const struct pm_id_name pm3_types[] = {
{CP_COND_EXEC, "CND_EXEC"},
{CP_CONTEXT_UPDATE, "CX__UPDT"},
{CP_DRAW_INDX, "DRW_NDX_"},
{CP_DRAW_INDX_BIN, "DRW_NDXB"},
{CP_EVENT_WRITE, "EVENT_WT"},
{CP_IM_LOAD, "IN__LOAD"},
{CP_IM_LOAD_IMMEDIATE, "IM_LOADI"},
{CP_IM_STORE, "IM_STORE"},
{CP_INDIRECT_BUFFER, "IND_BUF_"},
{CP_INDIRECT_BUFFER_PFD, "IND_BUFP"},
{CP_INTERRUPT, "PM4_INTR"},
{CP_INVALIDATE_STATE, "INV_STAT"},
{CP_LOAD_CONSTANT_CONTEXT, "LD_CN_CX"},
{CP_ME_INIT, "ME__INIT"},
{CP_NOP, "PM4__NOP"},
{CP_REG_RMW, "REG__RMW"},
{CP_REG_TO_MEM, "REG2_MEM"},
{CP_SET_BIN_BASE_OFFSET, "ST_BIN_O"},
{CP_SET_CONSTANT, "ST_CONST"},
{CP_SET_PROTECTED_MODE, "ST_PRT_M"},
{CP_SET_SHADER_BASES, "ST_SHD_B"},
{CP_WAIT_FOR_IDLE, "WAIT4IDL"},
};
/* Offset address pairs: start, end of range to dump (inclusive) */
/* GPU < Z470 */
static const int a200_registers[] = {
0x0000, 0x0008, 0x0010, 0x002c, 0x00ec, 0x00f4,
0x0100, 0x0110, 0x0118, 0x011c,
0x0700, 0x0704, 0x070c, 0x0720, 0x0754, 0x0764,
0x0770, 0x0774, 0x07a8, 0x07a8, 0x07b8, 0x07cc,
0x07d8, 0x07dc, 0x07f0, 0x07fc, 0x0e44, 0x0e48,
0x0e6c, 0x0e78, 0x0ec8, 0x0ed4, 0x0edc, 0x0edc,
0x0fe0, 0x0fec, 0x1100, 0x1100,
0x110c, 0x1110, 0x112c, 0x112c, 0x1134, 0x113c,
0x1148, 0x1148, 0x1150, 0x116c, 0x11fc, 0x11fc,
0x15e0, 0x161c, 0x1724, 0x1724, 0x1740, 0x1740,
0x1804, 0x1810, 0x1818, 0x1824, 0x182c, 0x1838,
0x184c, 0x1850, 0x28a4, 0x28ac, 0x28bc, 0x28c4,
0x2900, 0x290c, 0x2914, 0x2914, 0x2938, 0x293c,
0x30b0, 0x30b0, 0x30c0, 0x30c0, 0x30e0, 0x30f0,
0x3100, 0x3100, 0x3110, 0x3110, 0x3200, 0x3218,
0x3220, 0x3250, 0x3264, 0x3268, 0x3290, 0x3294,
0x3400, 0x340c, 0x3418, 0x3418, 0x3420, 0x342c,
0x34d0, 0x34d4, 0x36b8, 0x3704, 0x3720, 0x3750,
0x3760, 0x3764, 0x3800, 0x3800, 0x3808, 0x3810,
0x385c, 0x3878, 0x3b00, 0x3b24, 0x3b2c, 0x3b30,
0x3b40, 0x3b40, 0x3b50, 0x3b5c, 0x3b80, 0x3b88,
0x3c04, 0x3c08, 0x3c30, 0x3c30, 0x3c38, 0x3c48,
0x3c98, 0x3ca8, 0x3cb0, 0x3cb0,
0x8000, 0x8008, 0x8018, 0x803c, 0x8200, 0x8208,
0x8400, 0x8424, 0x8430, 0x8450, 0x8600, 0x8610,
0x87d4, 0x87dc, 0x8800, 0x8820, 0x8a00, 0x8a0c,
0x8a4c, 0x8a50, 0x8c00, 0x8c20, 0x8c48, 0x8c48,
0x8c58, 0x8c74, 0x8c90, 0x8c98, 0x8e00, 0x8e0c,
0x9000, 0x9008, 0x9018, 0x903c, 0x9200, 0x9208,
0x9400, 0x9424, 0x9430, 0x9450, 0x9600, 0x9610,
0x97d4, 0x97dc, 0x9800, 0x9820, 0x9a00, 0x9a0c,
0x9a4c, 0x9a50, 0x9c00, 0x9c20, 0x9c48, 0x9c48,
0x9c58, 0x9c74, 0x9c90, 0x9c98, 0x9e00, 0x9e0c,
0x10000, 0x1000c, 0x12000, 0x12014,
0x12400, 0x12400, 0x12420, 0x12420
};
/* GPU = Z470 */
static const int a220_registers[] = {
0x0000, 0x0008, 0x0010, 0x002c, 0x00ec, 0x00f4,
0x0100, 0x0110, 0x0118, 0x011c,
0x0700, 0x0704, 0x070c, 0x0720, 0x0754, 0x0764,
0x0770, 0x0774, 0x07a8, 0x07a8, 0x07b8, 0x07cc,
0x07d8, 0x07dc, 0x07f0, 0x07fc, 0x0e44, 0x0e48,
0x0e6c, 0x0e78, 0x0ec8, 0x0ed4, 0x0edc, 0x0edc,
0x0fe0, 0x0fec, 0x1100, 0x1100,
0x110c, 0x1110, 0x112c, 0x112c, 0x1134, 0x113c,
0x1148, 0x1148, 0x1150, 0x116c, 0x11fc, 0x11fc,
0x15e0, 0x161c, 0x1724, 0x1724, 0x1740, 0x1740,
0x1804, 0x1810, 0x1818, 0x1824, 0x182c, 0x1838,
0x184c, 0x1850, 0x28a4, 0x28ac, 0x28bc, 0x28c4,
0x2900, 0x2900, 0x2908, 0x290c, 0x2914, 0x2914,
0x2938, 0x293c, 0x30c0, 0x30c0, 0x30e0, 0x30e4,
0x30f0, 0x30f0, 0x3200, 0x3204, 0x3220, 0x324c,
0x3400, 0x340c, 0x3414, 0x3418, 0x3420, 0x342c,
0x34d0, 0x34d4, 0x36b8, 0x3704, 0x3720, 0x3750,
0x3760, 0x3764, 0x3800, 0x3800, 0x3808, 0x3810,
0x385c, 0x3878, 0x3b00, 0x3b24, 0x3b2c, 0x3b30,
0x3b40, 0x3b40, 0x3b50, 0x3b5c, 0x3b80, 0x3b88,
0x3c04, 0x3c08, 0x8000, 0x8008, 0x8018, 0x803c,
0x8200, 0x8208, 0x8400, 0x8408, 0x8410, 0x8424,
0x8430, 0x8450, 0x8600, 0x8610, 0x87d4, 0x87dc,
0x8800, 0x8808, 0x8810, 0x8810, 0x8820, 0x8820,
0x8a00, 0x8a08, 0x8a50, 0x8a50,
0x8c00, 0x8c20, 0x8c24, 0x8c28, 0x8c48, 0x8c48,
0x8c58, 0x8c58, 0x8c60, 0x8c74, 0x8c90, 0x8c98,
0x8e00, 0x8e0c, 0x9000, 0x9008, 0x9018, 0x903c,
0x9200, 0x9208, 0x9400, 0x9408, 0x9410, 0x9424,
0x9430, 0x9450, 0x9600, 0x9610, 0x97d4, 0x97dc,
0x9800, 0x9808, 0x9810, 0x9818, 0x9820, 0x9820,
0x9a00, 0x9a08, 0x9a50, 0x9a50, 0x9c00, 0x9c20,
0x9c48, 0x9c48, 0x9c58, 0x9c58, 0x9c60, 0x9c74,
0x9c90, 0x9c98, 0x9e00, 0x9e0c,
0x10000, 0x1000c, 0x12000, 0x12014,
0x12400, 0x12400, 0x12420, 0x12420
};
static uint32_t adreno_is_pm4_len(uint32_t word)
{
if (word == INVALID_RB_CMD)
return 0;
return (word >> 16) & 0x3FFF;
}
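/*
 * Editor's example: for the ME_INIT header 0xC0114800 this returns
 * (0xC0114800 >> 16) & 0x3FFF = 0x11 = 17, the packet's count field
 * (payload dwords minus one).
 */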
static bool adreno_is_pm4_type(uint32_t word)
{
int i;
if (word == INVALID_RB_CMD)
return 1;
if (adreno_is_pm4_len(word) > 16)
return 0;
if ((word & (3<<30)) == CP_TYPE0_PKT) {
for (i = 0; i < ARRAY_SIZE(pm0_types); ++i) {
if ((word & 0x7FFF) == pm0_types[i].id)
return 1;
}
return 0;
}
if ((word & (3<<30)) == CP_TYPE3_PKT) {
for (i = 0; i < ARRAY_SIZE(pm3_types); ++i) {
if ((word & 0xFFFF) == (pm3_types[i].id << 8))
return 1;
}
return 0;
}
return 0;
}
static const char *adreno_pm4_name(uint32_t word)
{
int i;
if (word == INVALID_RB_CMD)
return "--------";
if ((word & (3<<30)) == CP_TYPE0_PKT) {
for (i = 0; i < ARRAY_SIZE(pm0_types); ++i) {
if ((word & 0x7FFF) == pm0_types[i].id)
return pm0_types[i].name;
}
return "????????";
}
if ((word & (3<<30)) == CP_TYPE3_PKT) {
for (i = 0; i < ARRAY_SIZE(pm3_types); ++i) {
if ((word & 0xFFFF) == (pm3_types[i].id << 8))
return pm3_types[i].name;
}
return "????????";
}
return "????????";
}
static void adreno_dump_regs(struct kgsl_device *device,
const int *registers, int size)
{
int range = 0, offset = 0;
for (range = 0; range < size; range++) {
/* start and end are in dword offsets */
int start = registers[range * 2] / 4;
int end = registers[range * 2 + 1] / 4;
unsigned char linebuf[32 * 3 + 2 + 32 + 1];
int linelen, i;
for (offset = start; offset <= end; offset += linelen) {
unsigned int regvals[32/4];
linelen = min(end+1-offset, 32/4);
for (i = 0; i < linelen; ++i)
kgsl_regread(device, offset+i, regvals+i);
hex_dump_to_buffer(regvals, linelen*4, 32, 4,
linebuf, sizeof(linebuf), 0);
KGSL_LOG_DUMP(device,
"REG: %5.5X: %s\n", offset<<2, linebuf);
}
}
}
static void dump_ib(struct kgsl_device *device, char *buffId, uint32_t pt_base,
uint32_t base_offset, uint32_t ib_base, uint32_t ib_size, bool dump)
{
unsigned int memsize;
uint8_t *base_addr = kgsl_sharedmem_convertaddr(device, pt_base,
ib_base, &memsize);
if (base_addr && dump)
print_hex_dump(KERN_ERR, buffId, DUMP_PREFIX_OFFSET,
32, 4, base_addr, ib_size*4, 0);
else
KGSL_LOG_DUMP(device, "%s base:%8.8X ib_size:%d "
"offset:%5.5X%s\n",
buffId, ib_base, ib_size*4, base_offset,
base_addr ? "" : " [Invalid]");
}
#define IB_LIST_SIZE 64
struct ib_list {
int count;
uint32_t bases[IB_LIST_SIZE];
uint32_t sizes[IB_LIST_SIZE];
uint32_t offsets[IB_LIST_SIZE];
};
static void dump_ib1(struct kgsl_device *device, uint32_t pt_base,
uint32_t base_offset,
uint32_t ib1_base, uint32_t ib1_size,
struct ib_list *ib_list, bool dump)
{
int i, j;
uint32_t value;
uint32_t *ib1_addr;
unsigned int memsize;
dump_ib(device, "IB1:", pt_base, base_offset, ib1_base,
ib1_size, dump);
/* fetch virtual address for given IB base */
ib1_addr = (uint32_t *)kgsl_sharedmem_convertaddr(device, pt_base,
ib1_base, &memsize);
if (!ib1_addr)
return;
for (i = 0; i+3 < ib1_size; ) {
value = ib1_addr[i++];
if (value == cp_type3_packet(CP_INDIRECT_BUFFER_PFD, 2)) {
uint32_t ib2_base = ib1_addr[i++];
uint32_t ib2_size = ib1_addr[i++];
/* find previous match */
for (j = 0; j < ib_list->count; ++j)
if (ib_list->sizes[j] == ib2_size
&& ib_list->bases[j] == ib2_base)
break;
if (j < ib_list->count || ib_list->count
>= IB_LIST_SIZE)
continue;
/* store match */
ib_list->sizes[ib_list->count] = ib2_size;
ib_list->bases[ib_list->count] = ib2_base;
ib_list->offsets[ib_list->count] = i<<2;
++ib_list->count;
}
}
}
static void adreno_dump_rb_buffer(const void *buf, size_t len,
char *linebuf, size_t linebuflen, int *argp)
{
const u32 *ptr4 = buf;
const int ngroups = len;
int lx = 0, j;
bool nxsp = 1;
for (j = 0; j < ngroups; j++) {
if (*argp < 0) {
lx += scnprintf(linebuf + lx, linebuflen - lx, " <");
*argp = -*argp;
} else if (nxsp)
lx += scnprintf(linebuf + lx, linebuflen - lx, " ");
else
nxsp = 1;
if (!*argp && adreno_is_pm4_type(ptr4[j])) {
lx += scnprintf(linebuf + lx, linebuflen - lx,
"%s", adreno_pm4_name(ptr4[j]));
*argp = -(adreno_is_pm4_len(ptr4[j])+1);
} else {
lx += scnprintf(linebuf + lx, linebuflen - lx,
"%8.8X", ptr4[j]);
if (*argp > 1)
--*argp;
else if (*argp == 1) {
*argp = 0;
nxsp = 0;
lx += scnprintf(linebuf + lx, linebuflen - lx,
"> ");
}
}
}
linebuf[lx] = '\0';
}
static bool adreno_rb_use_hex(void)
{
#ifdef CONFIG_MSM_KGSL_PSTMRTMDMP_RB_HEX
return 1;
#else
return 0;
#endif
}
static void adreno_dump_rb(struct kgsl_device *device, const void *buf,
size_t len, int start, int size)
{
const uint32_t *ptr = buf;
int i, remaining, args = 0;
unsigned char linebuf[32 * 3 + 2 + 32 + 1];
const int rowsize = 8;
len >>= 2;
remaining = len;
for (i = 0; i < len; i += rowsize) {
int linelen = min(remaining, rowsize);
remaining -= rowsize;
if (adreno_rb_use_hex())
hex_dump_to_buffer(ptr+i, linelen*4, rowsize*4, 4,
linebuf, sizeof(linebuf), 0);
else
adreno_dump_rb_buffer(ptr+i, linelen, linebuf,
sizeof(linebuf), &args);
KGSL_LOG_DUMP(device,
"RB: %4.4X:%s\n", (start+i)%size, linebuf);
}
}
static bool adreno_ib_dump_enabled(void)
{
#ifdef CONFIG_MSM_KGSL_PSTMRTMDMP_NO_IB_DUMP
return 0;
#else
return 1;
#endif
}
struct log_field {
bool show;
const char *display;
};
static int adreno_dump_fields_line(struct kgsl_device *device,
const char *start, char *str, int slen,
const struct log_field **lines,
int num)
{
const struct log_field *l = *lines;
int sptr, count = 0;
sptr = snprintf(str, slen, "%s", start);
for ( ; num && sptr < slen; num--, l++) {
int ilen = strlen(l->display);
if (!l->show)
continue;
if (count)
ilen += strlen(" | ");
if (ilen > (slen - sptr))
break;
if (count++)
sptr += snprintf(str + sptr, slen - sptr, " | ");
sptr += snprintf(str + sptr, slen - sptr, "%s", l->display);
}
KGSL_LOG_DUMP(device, "%s\n", str);
*lines = l;
return num;
}
static void adreno_dump_fields(struct kgsl_device *device,
const char *start, const struct log_field *lines,
int num)
{
char lb[90];
const char *sstr = start;
lb[sizeof(lb) - 1] = '\0';
while (num) {
int ret = adreno_dump_fields_line(device, sstr, lb,
sizeof(lb) - 1, &lines, num);
if (ret == num)
break;
num = ret;
sstr = " ";
}
}
static int adreno_dump(struct kgsl_device *device)
{
unsigned int r1, r2, r3, rbbm_status;
unsigned int cp_ib1_base, cp_ib1_bufsz, cp_stat;
unsigned int cp_ib2_base, cp_ib2_bufsz;
unsigned int pt_base, cur_pt_base;
unsigned int cp_rb_base, rb_count;
unsigned int cp_rb_wptr, cp_rb_rptr;
unsigned int i;
int result = 0;
uint32_t *rb_copy;
const uint32_t *rb_vaddr;
int num_item = 0;
int read_idx, write_idx;
unsigned int ts_processed, rb_memsize;
static struct ib_list ib_list;
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
mb();
KGSL_LOG_DUMP(device, "POWER: FLAGS = %08lX | ACTIVE POWERLEVEL = %08X",
pwr->power_flags, pwr->active_pwrlevel);
KGSL_LOG_DUMP(device, "POWER: INTERVAL TIMEOUT = %08X ",
pwr->interval_timeout);
KGSL_LOG_DUMP(device, "GRP_CLK = %lu ",
kgsl_get_clkrate(pwr->grp_clks[0]));
KGSL_LOG_DUMP(device, "BUS CLK = %lu ",
kgsl_get_clkrate(pwr->ebi1_clk));
kgsl_regread(device, REG_RBBM_STATUS, &rbbm_status);
kgsl_regread(device, REG_RBBM_PM_OVERRIDE1, &r2);
kgsl_regread(device, REG_RBBM_PM_OVERRIDE2, &r3);
KGSL_LOG_DUMP(device, "RBBM: STATUS = %08X | PM_OVERRIDE1 = %08X | "
"PM_OVERRIDE2 = %08X\n", rbbm_status, r2, r3);
kgsl_regread(device, REG_RBBM_INT_CNTL, &r1);
kgsl_regread(device, REG_RBBM_INT_STATUS, &r2);
kgsl_regread(device, REG_RBBM_READ_ERROR, &r3);
KGSL_LOG_DUMP(device, " INT_CNTL = %08X | INT_STATUS = %08X | "
"READ_ERROR = %08X\n", r1, r2, r3);
{
char cmdFifo[16];
struct log_field lines[] = {
{rbbm_status & 0x001F, cmdFifo},
{rbbm_status & BIT(5), "TC busy "},
{rbbm_status & BIT(8), "HIRQ pending"},
{rbbm_status & BIT(9), "CPRQ pending"},
{rbbm_status & BIT(10), "CFRQ pending"},
{rbbm_status & BIT(11), "PFRQ pending"},
{rbbm_status & BIT(12), "VGT 0DMA bsy"},
{rbbm_status & BIT(14), "RBBM WU busy"},
{rbbm_status & BIT(16), "CP NRT busy "},
{rbbm_status & BIT(18), "MH busy "},
{rbbm_status & BIT(19), "MH chncy bsy"},
{rbbm_status & BIT(21), "SX busy "},
{rbbm_status & BIT(22), "TPC busy "},
{rbbm_status & BIT(24), "SC CNTX busy"},
{rbbm_status & BIT(25), "PA busy "},
{rbbm_status & BIT(26), "VGT busy "},
{rbbm_status & BIT(27), "SQ cntx1 bsy"},
{rbbm_status & BIT(28), "SQ cntx0 bsy"},
{rbbm_status & BIT(30), "RB busy "},
{rbbm_status & BIT(31), "Grphs pp bsy"},
};
snprintf(cmdFifo, sizeof(cmdFifo), "CMD FIFO=%01X ",
rbbm_status & 0xf);
adreno_dump_fields(device, " STATUS=", lines,
ARRAY_SIZE(lines));
}
kgsl_regread(device, REG_CP_RB_BASE, &cp_rb_base);
kgsl_regread(device, REG_CP_RB_CNTL, &r2);
rb_count = 2 << (r2 & (BIT(6)-1));
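/*
 * Editor's note: the low six bits of CP_RB_CNTL hold the log2 of the
 * ring size in quadwords, so with the default 32 KB ring the field
 * reads 12 and this computes 2 << 12 = 8192 dwords.
 */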
kgsl_regread(device, REG_CP_RB_RPTR_ADDR, &r3);
KGSL_LOG_DUMP(device,
"CP_RB: BASE = %08X | CNTL = %08X | RPTR_ADDR = %08X"
" | rb_count = %08X\n", cp_rb_base, r2, r3, rb_count);
{
struct adreno_ringbuffer *rb = &adreno_dev->ringbuffer;
if (rb->sizedwords != rb_count)
rb_count = rb->sizedwords;
}
kgsl_regread(device, REG_CP_RB_RPTR, &cp_rb_rptr);
kgsl_regread(device, REG_CP_RB_WPTR, &cp_rb_wptr);
kgsl_regread(device, REG_CP_RB_RPTR_WR, &r3);
KGSL_LOG_DUMP(device,
" RPTR = %08X | WPTR = %08X | RPTR_WR = %08X"
"\n", cp_rb_rptr, cp_rb_wptr, r3);
kgsl_regread(device, REG_CP_IB1_BASE, &cp_ib1_base);
kgsl_regread(device, REG_CP_IB1_BUFSZ, &cp_ib1_bufsz);
KGSL_LOG_DUMP(device,
"CP_IB1: BASE = %08X | BUFSZ = %d\n", cp_ib1_base,
cp_ib1_bufsz);
kgsl_regread(device, REG_CP_IB2_BASE, &cp_ib2_base);
kgsl_regread(device, REG_CP_IB2_BUFSZ, &cp_ib2_bufsz);
KGSL_LOG_DUMP(device,
"CP_IB2: BASE = %08X | BUFSZ = %d\n", cp_ib2_base,
cp_ib2_bufsz);
kgsl_regread(device, REG_CP_INT_CNTL, &r1);
kgsl_regread(device, REG_CP_INT_STATUS, &r2);
KGSL_LOG_DUMP(device, "CP_INT: CNTL = %08X | STATUS = %08X\n", r1, r2);
kgsl_regread(device, REG_CP_ME_CNTL, &r1);
kgsl_regread(device, REG_CP_ME_STATUS, &r2);
kgsl_regread(device, REG_MASTER_INT_SIGNAL, &r3);
KGSL_LOG_DUMP(device,
"CP_ME: CNTL = %08X | STATUS = %08X | MSTR_INT_SGNL = "
"%08X\n", r1, r2, r3);
kgsl_regread(device, REG_CP_STAT, &cp_stat);
KGSL_LOG_DUMP(device, "CP_STAT = %08X\n", cp_stat);
#ifndef CONFIG_MSM_KGSL_PSTMRTMDMP_CP_STAT_NO_DETAIL
{
struct log_field lns[] = {
{cp_stat & BIT(0), "WR_BSY 0"},
{cp_stat & BIT(1), "RD_RQ_BSY 1"},
{cp_stat & BIT(2), "RD_RTN_BSY 2"},
};
adreno_dump_fields(device, " MIU=", lns, ARRAY_SIZE(lns));
}
{
struct log_field lns[] = {
{cp_stat & BIT(5), "RING_BUSY 5"},
{cp_stat & BIT(6), "NDRCTS_BSY 6"},
{cp_stat & BIT(7), "NDRCT2_BSY 7"},
{cp_stat & BIT(9), "ST_BUSY 9"},
{cp_stat & BIT(10), "BUSY 10"},
};
adreno_dump_fields(device, " CSF=", lns, ARRAY_SIZE(lns));
}
{
struct log_field lns[] = {
{cp_stat & BIT(11), "RNG_Q_BSY 11"},
{cp_stat & BIT(12), "NDRCTS_Q_B12"},
{cp_stat & BIT(13), "NDRCT2_Q_B13"},
{cp_stat & BIT(16), "ST_QUEUE_B16"},
{cp_stat & BIT(17), "PFP_BUSY 17"},
};
adreno_dump_fields(device, " RING=", lns, ARRAY_SIZE(lns));
}
{
struct log_field lns[] = {
{cp_stat & BIT(3), "RBIU_BUSY 3"},
{cp_stat & BIT(4), "RCIU_BUSY 4"},
{cp_stat & BIT(18), "MQ_RG_BSY 18"},
{cp_stat & BIT(19), "MQ_NDRS_BS19"},
{cp_stat & BIT(20), "MQ_NDR2_BS20"},
{cp_stat & BIT(21), "MIU_WC_STL21"},
{cp_stat & BIT(22), "CP_NRT_BSY22"},
{cp_stat & BIT(23), "3D_BUSY 23"},
{cp_stat & BIT(26), "ME_BUSY 26"},
{cp_stat & BIT(29), "ME_WC_BSY 29"},
{cp_stat & BIT(30), "MIU_FF EM 30"},
{cp_stat & BIT(31), "CP_BUSY 31"},
};
adreno_dump_fields(device, " CP_STT=", lns, ARRAY_SIZE(lns));
}
#endif
kgsl_regread(device, REG_SCRATCH_REG0, &r1);
KGSL_LOG_DUMP(device, "SCRATCH_REG0 = %08X\n", r1);
kgsl_regread(device, REG_COHER_SIZE_PM4, &r1);
kgsl_regread(device, REG_COHER_BASE_PM4, &r2);
kgsl_regread(device, REG_COHER_STATUS_PM4, &r3);
KGSL_LOG_DUMP(device,
"COHER: SIZE_PM4 = %08X | BASE_PM4 = %08X | STATUS_PM4"
" = %08X\n", r1, r2, r3);
kgsl_regread(device, MH_AXI_ERROR, &r1);
KGSL_LOG_DUMP(device, "MH: AXI_ERROR = %08X\n", r1);
kgsl_regread(device, MH_MMU_PAGE_FAULT, &r1);
kgsl_regread(device, MH_MMU_CONFIG, &r2);
kgsl_regread(device, MH_MMU_MPU_BASE, &r3);
KGSL_LOG_DUMP(device,
"MH_MMU: PAGE_FAULT = %08X | CONFIG = %08X | MPU_BASE ="
" %08X\n", r1, r2, r3);
kgsl_regread(device, MH_MMU_MPU_END, &r1);
kgsl_regread(device, MH_MMU_VA_RANGE, &r2);
pt_base = kgsl_mmu_get_current_ptbase(device);
KGSL_LOG_DUMP(device,
" MPU_END = %08X | VA_RANGE = %08X | PT_BASE ="
" %08X\n", r1, r2, pt_base);
cur_pt_base = pt_base;
KGSL_LOG_DUMP(device, "PAGETABLE SIZE: %08X ", KGSL_PAGETABLE_SIZE);
kgsl_regread(device, MH_MMU_TRAN_ERROR, &r1);
KGSL_LOG_DUMP(device, " TRAN_ERROR = %08X\n", r1);
kgsl_regread(device, MH_INTERRUPT_MASK, &r1);
kgsl_regread(device, MH_INTERRUPT_STATUS, &r2);
KGSL_LOG_DUMP(device,
"MH_INTERRUPT: MASK = %08X | STATUS = %08X\n", r1, r2);
ts_processed = device->ftbl->readtimestamp(device,
KGSL_TIMESTAMP_RETIRED);
KGSL_LOG_DUMP(device, "TIMESTM RTRD: %08X\n", ts_processed);
num_item = adreno_ringbuffer_count(&adreno_dev->ringbuffer,
cp_rb_rptr);
if (num_item <= 0)
KGSL_LOG_POSTMORTEM_WRITE(device, "Ringbuffer is Empty.\n");
rb_copy = vmalloc(rb_count<<2);
if (!rb_copy) {
KGSL_LOG_POSTMORTEM_WRITE(device,
"vmalloc(%d) failed\n", rb_count << 2);
result = -ENOMEM;
goto end;
}
KGSL_LOG_DUMP(device, "RB: rd_addr:%8.8x rb_size:%d num_item:%d\n",
cp_rb_base, rb_count<<2, num_item);
rb_vaddr = (const uint32_t *)kgsl_sharedmem_convertaddr(device,
cur_pt_base, cp_rb_base, &rb_memsize);
if (!rb_vaddr) {
KGSL_LOG_POSTMORTEM_WRITE(device,
"Can't fetch vaddr for CP_RB_BASE\n");
goto error_vfree;
}
read_idx = (int)cp_rb_rptr - NUM_DWORDS_OF_RINGBUFFER_HISTORY;
if (read_idx < 0)
read_idx += rb_count;
write_idx = (int)cp_rb_wptr + 16;
if (write_idx > rb_count)
write_idx -= rb_count;
num_item += NUM_DWORDS_OF_RINGBUFFER_HISTORY+16;
if (num_item > rb_count)
num_item = rb_count;
if (write_idx >= read_idx)
memcpy(rb_copy, rb_vaddr+read_idx, num_item<<2);
else {
int part1_c = rb_count-read_idx;
memcpy(rb_copy, rb_vaddr+read_idx, part1_c<<2);
memcpy(rb_copy+part1_c, rb_vaddr, (num_item-part1_c)<<2);
}
/* extract the latest ib commands from the buffer */
ib_list.count = 0;
i = 0;
for (read_idx = 0; read_idx < num_item; ) {
uint32_t this_cmd = rb_copy[read_idx++];
if (this_cmd == cp_type3_packet(CP_INDIRECT_BUFFER_PFD, 2)) {
uint32_t ib_addr = rb_copy[read_idx++];
uint32_t ib_size = rb_copy[read_idx++];
dump_ib1(device, cur_pt_base, (read_idx-3)<<2, ib_addr,
ib_size, &ib_list, 0);
for (; i < ib_list.count; ++i)
dump_ib(device, "IB2:", cur_pt_base,
ib_list.offsets[i],
ib_list.bases[i],
ib_list.sizes[i], 0);
} else if (this_cmd == cp_type0_packet(MH_MMU_PT_BASE, 1)) {
/* Set cur_pt_base to the new pagetable base */
cur_pt_base = rb_copy[read_idx++];
}
}
/* Restore cur_pt_base back to the pt_base of
the process in whose context the GPU hung */
cur_pt_base = pt_base;
read_idx = (int)cp_rb_rptr - NUM_DWORDS_OF_RINGBUFFER_HISTORY;
if (read_idx < 0)
read_idx += rb_count;
KGSL_LOG_DUMP(device,
"RB: addr=%8.8x window:%4.4x-%4.4x, start:%4.4x\n",
cp_rb_base, cp_rb_rptr, cp_rb_wptr, read_idx);
adreno_dump_rb(device, rb_copy, num_item<<2, read_idx, rb_count);
if (adreno_ib_dump_enabled()) {
for (read_idx = NUM_DWORDS_OF_RINGBUFFER_HISTORY;
read_idx >= 0; --read_idx) {
uint32_t this_cmd = rb_copy[read_idx];
if (this_cmd == cp_type3_packet(
CP_INDIRECT_BUFFER_PFD, 2)) {
uint32_t ib_addr = rb_copy[read_idx+1];
uint32_t ib_size = rb_copy[read_idx+2];
if (ib_size && cp_ib1_base == ib_addr) {
KGSL_LOG_DUMP(device,
"IB1: base:%8.8X "
"count:%d\n", ib_addr, ib_size);
dump_ib(device, "IB1: ", cur_pt_base,
read_idx<<2, ib_addr, ib_size,
1);
}
}
}
for (i = 0; i < ib_list.count; ++i) {
uint32_t ib_size = ib_list.sizes[i];
uint32_t ib_offset = ib_list.offsets[i];
if (ib_size && cp_ib2_base == ib_list.bases[i]) {
KGSL_LOG_DUMP(device,
"IB2: base:%8.8X count:%d\n",
cp_ib2_base, ib_size);
dump_ib(device, "IB2: ", cur_pt_base, ib_offset,
ib_list.bases[i], ib_size, 1);
}
}
}
/* Dump the registers if the user asked for it */
if (adreno_is_a20x(adreno_dev))
adreno_dump_regs(device, a200_registers,
ARRAY_SIZE(a200_registers) / 2);
else if (adreno_is_a22x(adreno_dev))
adreno_dump_regs(device, a220_registers,
ARRAY_SIZE(a220_registers) / 2);
error_vfree:
vfree(rb_copy);
end:
return result;
}
/**
* adreno_postmortem_dump - Dump the current GPU state
* @device - A pointer to the KGSL device to dump
* @manual - A flag that indicates if this was a manually triggered
* dump (from debugfs). If zero, then this is assumed to be a
* dump automatically triggered by a hang
*/
int adreno_postmortem_dump(struct kgsl_device *device, int manual)
{
bool saved_nap;
BUG_ON(device == NULL);
kgsl_cffdump_hang(device->id);
/* For a manual dump, make sure that the system is idle */
if (manual) {
if (device->active_cnt != 0) {
mutex_unlock(&device->mutex);
wait_for_completion(&device->suspend_gate);
mutex_lock(&device->mutex);
}
if (device->state == KGSL_STATE_ACTIVE)
kgsl_idle(device, KGSL_TIMEOUT_DEFAULT);
}
/* Disable the idle timer so we don't get interrupted */
del_timer_sync(&device->idle_timer);
/* Turn off napping to make sure we have the clocks' full
attention through the following process */
saved_nap = device->pwrctrl.nap_allowed;
device->pwrctrl.nap_allowed = false;
/* Force on the clocks */
kgsl_pwrctrl_wake(device);
/* Disable the irq */
kgsl_pwrctrl_irq(device, KGSL_PWRFLAGS_OFF);
/* If this is not a manual trigger, then set up the
state to try to recover */
if (!manual) {
device->state = KGSL_STATE_DUMP_AND_RECOVER;
KGSL_PWR_WARN(device,
"state -> DUMP_AND_RECOVER, device %d\n",
device->id);
}
KGSL_DRV_ERR(device,
"wait for work in workqueue to complete\n");
mutex_unlock(&device->mutex);
flush_workqueue(device->work_queue);
mutex_lock(&device->mutex);
adreno_dump(device);
/* Restore nap mode */
device->pwrctrl.nap_allowed = saved_nap;
/* On a manual trigger, turn on the interrupts and put
the clocks to sleep. They will recover themselves
on the next event. For a hang, leave things as they
are until recovery kicks in. */
if (manual) {
kgsl_pwrctrl_irq(device, KGSL_PWRFLAGS_ON);
/* try to go into a sleep mode until the next event */
device->requested_state = KGSL_STATE_SLEEP;
kgsl_pwrctrl_sleep(device);
}
KGSL_DRV_ERR(device, "Dump Finished\n");
return 0;
}

@ -1,21 +0,0 @@
/* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __ADRENO_POSTMORTEM_H
#define __ADRENO_POSTMORTEM_H
struct kgsl_device;
int adreno_postmortem_dump(struct kgsl_device *device, int manual);
#endif /* __ADRENO_POSTMORTEM_H */

@ -1,812 +0,0 @@
/* Copyright (c) 2002,2007-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/firmware.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/log2.h>
#include "kgsl.h"
#include "kgsl_sharedmem.h"
#include "kgsl_cffdump.h"
#include "adreno.h"
#include "adreno_pm4types.h"
#include "adreno_ringbuffer.h"
#include "a2xx_reg.h"
#define GSL_RB_NOP_SIZEDWORDS 2
/* protected mode error checking below register address 0x800
* note: if the CP_INTERRUPT packet is used then the check needs
* to change to below register address 0x7C8
*/
#define GSL_RB_PROTECTED_MODE_CONTROL 0x200001F2
/* Firmware file names
* Legacy file names must remain, but the macro names have been
* replaced to match the current kgsl model.
* a200 is yamato
* a220 is leia
*/
#define A200_PFP_FW "yamato_pfp.fw"
#define A200_PM4_FW "yamato_pm4.fw"
#define A220_PFP_470_FW "leia_pfp_470.fw"
#define A220_PM4_470_FW "leia_pm4_470.fw"
#define A225_PFP_FW "a225_pfp.fw"
#define A225_PM4_FW "a225_pm4.fw"
static void adreno_ringbuffer_submit(struct adreno_ringbuffer *rb)
{
BUG_ON(rb->wptr == 0);
/* Let the pwrscale policy know that new commands have
been submitted. */
kgsl_pwrscale_busy(rb->device);
/*synchronize memory before informing the hardware of the
*new commands.
*/
mb();
adreno_regwrite(rb->device, REG_CP_RB_WPTR, rb->wptr);
}
static void
adreno_ringbuffer_waitspace(struct adreno_ringbuffer *rb, unsigned int numcmds,
int wptr_ahead)
{
int nopcount;
unsigned int freecmds;
unsigned int *cmds;
uint cmds_gpu;
/* if wptr ahead, fill the remaining with NOPs */
if (wptr_ahead) {
/* -1 for header */
nopcount = rb->sizedwords - rb->wptr - 1;
cmds = (unsigned int *)rb->buffer_desc.hostptr + rb->wptr;
cmds_gpu = rb->buffer_desc.gpuaddr + sizeof(uint)*rb->wptr;
GSL_RB_WRITE(cmds, cmds_gpu, cp_nop_packet(nopcount));
/* Make sure that rptr is not 0 before submitting
* commands at the end of ringbuffer. We do not
* want the rptr and wptr to become equal when
* the ringbuffer is not empty */
do {
GSL_RB_GET_READPTR(rb, &rb->rptr);
} while (!rb->rptr);
rb->wptr++;
adreno_ringbuffer_submit(rb);
rb->wptr = 0;
}
/* wait for space in ringbuffer */
do {
GSL_RB_GET_READPTR(rb, &rb->rptr);
freecmds = rb->rptr - rb->wptr;
} while ((freecmds != 0) && (freecmds <= numcmds));
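/*
 * Editor's note: freecmds == 0 means rptr has caught up with wptr,
 * i.e. the ring is empty and the whole buffer is free, so the loop
 * exits in that case as well as when enough space has opened up.
 */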
}
static unsigned int *adreno_ringbuffer_allocspace(struct adreno_ringbuffer *rb,
unsigned int numcmds)
{
unsigned int *ptr = NULL;
BUG_ON(numcmds >= rb->sizedwords);
GSL_RB_GET_READPTR(rb, &rb->rptr);
/* check for available space */
if (rb->wptr >= rb->rptr) {
/* wptr ahead or equal to rptr */
/* reserve dwords for nop packet */
if ((rb->wptr + numcmds) > (rb->sizedwords -
GSL_RB_NOP_SIZEDWORDS))
adreno_ringbuffer_waitspace(rb, numcmds, 1);
} else {
/* wptr behind rptr */
if ((rb->wptr + numcmds) >= rb->rptr)
adreno_ringbuffer_waitspace(rb, numcmds, 0);
/* check for remaining space */
/* reserve dwords for nop packet */
if ((rb->wptr + numcmds) > (rb->sizedwords -
GSL_RB_NOP_SIZEDWORDS))
adreno_ringbuffer_waitspace(rb, numcmds, 1);
}
ptr = (unsigned int *)rb->buffer_desc.hostptr + rb->wptr;
rb->wptr += numcmds;
return ptr;
}
static int _load_firmware(struct kgsl_device *device, const char *fwfile,
void **data, int *len)
{
const struct firmware *fw = NULL;
int ret;
ret = request_firmware(&fw, fwfile, device->dev);
if (ret) {
KGSL_DRV_ERR(device, "request_firmware(%s) failed: %d\n",
fwfile, ret);
return ret;
}
*data = kmalloc(fw->size, GFP_KERNEL);
if (*data) {
memcpy(*data, fw->data, fw->size);
*len = fw->size;
} else
KGSL_MEM_ERR(device, "kmalloc(%d) failed\n", fw->size);
release_firmware(fw);
return (*data != NULL) ? 0 : -ENOMEM;
}
static int adreno_ringbuffer_load_pm4_ucode(struct kgsl_device *device)
{
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
int i, ret = 0;
if (adreno_dev->pm4_fw == NULL) {
int len;
void *ptr;
ret = _load_firmware(device, adreno_dev->pm4_fwfile,
&ptr, &len);
if (ret)
goto err;
/* PM4 size is 3 dword aligned plus 1 dword of version */
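/* e.g. a valid image is 12*k + 4 bytes, so len % 12 must equal 4 */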
if (len % ((sizeof(uint32_t) * 3)) != sizeof(uint32_t)) {
KGSL_DRV_ERR(device, "Bad firmware size: %d\n", len);
ret = -EINVAL;
kfree(ptr);
goto err;
}
adreno_dev->pm4_fw_size = len / sizeof(uint32_t);
adreno_dev->pm4_fw = ptr;
}
KGSL_DRV_INFO(device, "loading pm4 ucode version: %d\n",
adreno_dev->pm4_fw[0]);
adreno_regwrite(device, REG_CP_DEBUG, 0x02000000);
adreno_regwrite(device, REG_CP_ME_RAM_WADDR, 0);
for (i = 1; i < adreno_dev->pm4_fw_size; i++)
adreno_regwrite(device, REG_CP_ME_RAM_DATA,
adreno_dev->pm4_fw[i]);
err:
return ret;
}
static int adreno_ringbuffer_load_pfp_ucode(struct kgsl_device *device)
{
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
int i, ret = 0;
if (adreno_dev->pfp_fw == NULL) {
int len;
void *ptr;
ret = _load_firmware(device, adreno_dev->pfp_fwfile,
&ptr, &len);
if (ret)
goto err;
/* PFP size should be dword aligned */
if (len % sizeof(uint32_t) != 0) {
KGSL_DRV_ERR(device, "Bad firmware size: %d\n", len);
ret = -EINVAL;
kfree(ptr);
goto err;
}
adreno_dev->pfp_fw_size = len / sizeof(uint32_t);
adreno_dev->pfp_fw = ptr;
}
KGSL_DRV_INFO(device, "loading pfp ucode version: %d\n",
adreno_dev->pfp_fw[0]);
adreno_regwrite(device, REG_CP_PFP_UCODE_ADDR, 0);
for (i = 1; i < adreno_dev->pfp_fw_size; i++)
adreno_regwrite(device, REG_CP_PFP_UCODE_DATA,
adreno_dev->pfp_fw[i]);
err:
return ret;
}
int adreno_ringbuffer_start(struct adreno_ringbuffer *rb, unsigned int init_ram)
{
int status;
/*cp_rb_cntl_u cp_rb_cntl; */
union reg_cp_rb_cntl cp_rb_cntl;
unsigned int *cmds, rb_cntl;
struct kgsl_device *device = rb->device;
uint cmds_gpu;
if (rb->flags & KGSL_FLAGS_STARTED)
return 0;
if (init_ram) {
rb->timestamp = 0;
GSL_RB_INIT_TIMESTAMP(rb);
}
kgsl_sharedmem_set(&rb->memptrs_desc, 0, 0,
sizeof(struct kgsl_rbmemptrs));
kgsl_sharedmem_set(&rb->buffer_desc, 0, 0xAA,
(rb->sizedwords << 2));
adreno_regwrite(device, REG_CP_RB_WPTR_BASE,
(rb->memptrs_desc.gpuaddr
+ GSL_RB_MEMPTRS_WPTRPOLL_OFFSET));
/* setup WPTR delay */
adreno_regwrite(device, REG_CP_RB_WPTR_DELAY, 0 /*0x70000010 */);
/*setup REG_CP_RB_CNTL */
adreno_regread(device, REG_CP_RB_CNTL, &rb_cntl);
cp_rb_cntl.val = rb_cntl;
/*
* The size of the ringbuffer in the hardware is the log2
* representation of the size in quadwords (sizedwords / 2)
*/
cp_rb_cntl.f.rb_bufsz = ilog2(rb->sizedwords >> 1);
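/*
 * Editor's note: with the default 32 KB KGSL_RB_SIZE, sizedwords is
 * 8192, so this programs ilog2(4096) = 12.
 */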
/*
* Specify the quadwords to read before updating mem RPTR.
* Like above, pass the log2 representation of the blocksize
* in quadwords.
*/
cp_rb_cntl.f.rb_blksz = ilog2(KGSL_RB_BLKSIZE >> 3);
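/*
 * Editor's note: KGSL_RB_BLKSIZE is 16 bytes (2 quadwords), so this
 * programs ilog2(2) = 1.
 */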
cp_rb_cntl.f.rb_poll_en = GSL_RB_CNTL_POLL_EN; /* WPTR polling */
/* mem RPTR writebacks */
cp_rb_cntl.f.rb_no_update = GSL_RB_CNTL_NO_UPDATE;
adreno_regwrite(device, REG_CP_RB_CNTL, cp_rb_cntl.val);
adreno_regwrite(device, REG_CP_RB_BASE, rb->buffer_desc.gpuaddr);
adreno_regwrite(device, REG_CP_RB_RPTR_ADDR,
rb->memptrs_desc.gpuaddr +
GSL_RB_MEMPTRS_RPTR_OFFSET);
/* explicitly clear all cp interrupts */
adreno_regwrite(device, REG_CP_INT_ACK, 0xFFFFFFFF);
/* setup scratch/timestamp */
adreno_regwrite(device, REG_SCRATCH_ADDR,
device->memstore.gpuaddr +
KGSL_DEVICE_MEMSTORE_OFFSET(soptimestamp));
adreno_regwrite(device, REG_SCRATCH_UMSK,
GSL_RB_MEMPTRS_SCRATCH_MASK);
/* load the CP ucode */
status = adreno_ringbuffer_load_pm4_ucode(device);
if (status != 0)
return status;
/* load the prefetch parser ucode */
status = adreno_ringbuffer_load_pfp_ucode(device);
if (status != 0)
return status;
adreno_regwrite(device, REG_CP_QUEUE_THRESHOLDS, 0x000C0804);
rb->rptr = 0;
rb->wptr = 0;
/* clear ME_HALT to start micro engine */
adreno_regwrite(device, REG_CP_ME_CNTL, 0);
/* ME_INIT */
cmds = adreno_ringbuffer_allocspace(rb, 19);
cmds_gpu = rb->buffer_desc.gpuaddr + sizeof(uint)*(rb->wptr-19);
GSL_RB_WRITE(cmds, cmds_gpu, CP_HDR_ME_INIT);
/* All fields present (bits 9:0) */
GSL_RB_WRITE(cmds, cmds_gpu, 0x000003ff);
/* Disable/Enable Real-Time Stream processing (present but ignored) */
GSL_RB_WRITE(cmds, cmds_gpu, 0x00000000);
/* Enable (2D <-> 3D) implicit synchronization (present but ignored) */
GSL_RB_WRITE(cmds, cmds_gpu, 0x00000000);
GSL_RB_WRITE(cmds, cmds_gpu,
SUBBLOCK_OFFSET(REG_RB_SURFACE_INFO));
GSL_RB_WRITE(cmds, cmds_gpu,
SUBBLOCK_OFFSET(REG_PA_SC_WINDOW_OFFSET));
GSL_RB_WRITE(cmds, cmds_gpu,
SUBBLOCK_OFFSET(REG_VGT_MAX_VTX_INDX));
GSL_RB_WRITE(cmds, cmds_gpu,
SUBBLOCK_OFFSET(REG_SQ_PROGRAM_CNTL));
GSL_RB_WRITE(cmds, cmds_gpu,
SUBBLOCK_OFFSET(REG_RB_DEPTHCONTROL));
GSL_RB_WRITE(cmds, cmds_gpu,
SUBBLOCK_OFFSET(REG_PA_SU_POINT_SIZE));
GSL_RB_WRITE(cmds, cmds_gpu,
SUBBLOCK_OFFSET(REG_PA_SC_LINE_CNTL));
GSL_RB_WRITE(cmds, cmds_gpu,
SUBBLOCK_OFFSET(REG_PA_SU_POLY_OFFSET_FRONT_SCALE));
/* Vertex and Pixel Shader Start Addresses in instructions
* (3 DWORDS per instruction) */
GSL_RB_WRITE(cmds, cmds_gpu, 0x80000180);
/* Maximum Contexts */
GSL_RB_WRITE(cmds, cmds_gpu, 0x00000001);
/* Write Confirm Interval: the CP will wait
* wait_interval * 16 clocks between polls */
GSL_RB_WRITE(cmds, cmds_gpu, 0x00000000);
/* NQ and External Memory Swap */
GSL_RB_WRITE(cmds, cmds_gpu, 0x00000000);
/* Protected mode error checking */
GSL_RB_WRITE(cmds, cmds_gpu, GSL_RB_PROTECTED_MODE_CONTROL);
/* Disable header dumping and Header dump address */
GSL_RB_WRITE(cmds, cmds_gpu, 0x00000000);
/* Header dump size */
GSL_RB_WRITE(cmds, cmds_gpu, 0x00000000);
adreno_ringbuffer_submit(rb);
/* idle device to validate ME INIT */
status = adreno_idle(device, KGSL_TIMEOUT_DEFAULT);
if (status == 0)
rb->flags |= KGSL_FLAGS_STARTED;
return status;
}
void adreno_ringbuffer_stop(struct adreno_ringbuffer *rb)
{
if (rb->flags & KGSL_FLAGS_STARTED) {
/* ME_HALT */
adreno_regwrite(rb->device, REG_CP_ME_CNTL, 0x10000000);
rb->flags &= ~KGSL_FLAGS_STARTED;
}
}
int adreno_ringbuffer_init(struct kgsl_device *device)
{
int status;
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
struct adreno_ringbuffer *rb = &adreno_dev->ringbuffer;
rb->device = device;
/*
* It is silly to convert this to words and then back to bytes
* immediately below, but most of the rest of the code deals
* in words, so we might as well only do the math once
*/
rb->sizedwords = KGSL_RB_SIZE >> 2;
/* allocate memory for ringbuffer */
status = kgsl_allocate_contiguous(&rb->buffer_desc,
(rb->sizedwords << 2));
if (status != 0) {
adreno_ringbuffer_close(rb);
return status;
}
/* allocate memory for polling and timestamps */
/* This really can be at a 4-byte alignment boundary, but when using
* the MMU we need to place it at a page boundary */
status = kgsl_allocate_contiguous(&rb->memptrs_desc,
sizeof(struct kgsl_rbmemptrs));
if (status != 0) {
adreno_ringbuffer_close(rb);
return status;
}
/* overlay structure on memptrs memory */
rb->memptrs = (struct kgsl_rbmemptrs *) rb->memptrs_desc.hostptr;
return 0;
}
void adreno_ringbuffer_close(struct adreno_ringbuffer *rb)
{
struct adreno_device *adreno_dev = ADRENO_DEVICE(rb->device);
kgsl_sharedmem_free(&rb->buffer_desc);
kgsl_sharedmem_free(&rb->memptrs_desc);
kfree(adreno_dev->pfp_fw);
kfree(adreno_dev->pm4_fw);
adreno_dev->pfp_fw = NULL;
adreno_dev->pm4_fw = NULL;
memset(rb, 0, sizeof(struct adreno_ringbuffer));
}
static uint32_t
adreno_ringbuffer_addcmds(struct adreno_ringbuffer *rb,
unsigned int flags, unsigned int *cmds,
int sizedwords)
{
unsigned int *ringcmds;
unsigned int timestamp;
unsigned int total_sizedwords = sizedwords + 6;
unsigned int i;
unsigned int rcmd_gpu;
/* reserve space to temporarily turn off protected mode
* error checking if needed
*/
total_sizedwords += flags & KGSL_CMD_FLAGS_PMODE ? 4 : 0;
total_sizedwords += !(flags & KGSL_CMD_FLAGS_NO_TS_CMP) ? 7 : 0;
total_sizedwords += !(flags & KGSL_CMD_FLAGS_NOT_KERNEL_CMD) ? 2 : 0;
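/*
 * Editor's accounting note: the base of sizedwords + 6 reserves the
 * 2-dword CP_TIMESTAMP write plus the 4-dword CACHE_FLUSH_TS event
 * below; KGSL_CMD_FLAGS_PMODE adds two 2-dword SET_PROTECTED_MODE
 * packets, the timestamp-compare path adds a 5-dword COND_EXEC and a
 * 2-dword INTERRUPT, and kernel-issued commands prepend a 2-dword
 * NOP identifier.
 */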
ringcmds = adreno_ringbuffer_allocspace(rb, total_sizedwords);
rcmd_gpu = rb->buffer_desc.gpuaddr
+ sizeof(uint)*(rb->wptr-total_sizedwords);
if (!(flags & KGSL_CMD_FLAGS_NOT_KERNEL_CMD)) {
GSL_RB_WRITE(ringcmds, rcmd_gpu, cp_nop_packet(1));
GSL_RB_WRITE(ringcmds, rcmd_gpu, KGSL_CMD_IDENTIFIER);
}
if (flags & KGSL_CMD_FLAGS_PMODE) {
/* disable protected mode error checking */
GSL_RB_WRITE(ringcmds, rcmd_gpu,
cp_type3_packet(CP_SET_PROTECTED_MODE, 1));
GSL_RB_WRITE(ringcmds, rcmd_gpu, 0);
}
for (i = 0; i < sizedwords; i++) {
GSL_RB_WRITE(ringcmds, rcmd_gpu, *cmds);
cmds++;
}
if (flags & KGSL_CMD_FLAGS_PMODE) {
/* re-enable protected mode error checking */
GSL_RB_WRITE(ringcmds, rcmd_gpu,
cp_type3_packet(CP_SET_PROTECTED_MODE, 1));
GSL_RB_WRITE(ringcmds, rcmd_gpu, 1);
}
rb->timestamp++;
timestamp = rb->timestamp;
/* start-of-pipeline and end-of-pipeline timestamps */
GSL_RB_WRITE(ringcmds, rcmd_gpu, cp_type0_packet(REG_CP_TIMESTAMP, 1));
GSL_RB_WRITE(ringcmds, rcmd_gpu, rb->timestamp);
GSL_RB_WRITE(ringcmds, rcmd_gpu, cp_type3_packet(CP_EVENT_WRITE, 3));
GSL_RB_WRITE(ringcmds, rcmd_gpu, CACHE_FLUSH_TS);
GSL_RB_WRITE(ringcmds, rcmd_gpu,
(rb->device->memstore.gpuaddr +
KGSL_DEVICE_MEMSTORE_OFFSET(eoptimestamp)));
GSL_RB_WRITE(ringcmds, rcmd_gpu, rb->timestamp);
if (!(flags & KGSL_CMD_FLAGS_NO_TS_CMP)) {
/* Conditional execution based on memory values */
GSL_RB_WRITE(ringcmds, rcmd_gpu,
cp_type3_packet(CP_COND_EXEC, 4));
GSL_RB_WRITE(ringcmds, rcmd_gpu, (rb->device->memstore.gpuaddr +
KGSL_DEVICE_MEMSTORE_OFFSET(ts_cmp_enable)) >> 2);
GSL_RB_WRITE(ringcmds, rcmd_gpu, (rb->device->memstore.gpuaddr +
KGSL_DEVICE_MEMSTORE_OFFSET(ref_wait_ts)) >> 2);
GSL_RB_WRITE(ringcmds, rcmd_gpu, rb->timestamp);
/* # of conditional command DWORDs */
GSL_RB_WRITE(ringcmds, rcmd_gpu, 2);
GSL_RB_WRITE(ringcmds, rcmd_gpu,
cp_type3_packet(CP_INTERRUPT, 1));
GSL_RB_WRITE(ringcmds, rcmd_gpu, CP_INT_CNTL__RB_INT_MASK);
}
adreno_ringbuffer_submit(rb);
/* return timestamp of issued commands */
return timestamp;
}
void
adreno_ringbuffer_issuecmds(struct kgsl_device *device,
unsigned int flags,
unsigned int *cmds,
int sizedwords)
{
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
struct adreno_ringbuffer *rb = &adreno_dev->ringbuffer;
if (device->state & KGSL_STATE_HUNG)
return;
adreno_ringbuffer_addcmds(rb, flags, cmds, sizedwords);
}
int
adreno_ringbuffer_issueibcmds(struct kgsl_device_private *dev_priv,
struct kgsl_context *context,
struct kgsl_ibdesc *ibdesc,
unsigned int numibs,
uint32_t *timestamp,
unsigned int flags)
{
struct kgsl_device *device = dev_priv->device;
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
unsigned int *link;
unsigned int *cmds;
unsigned int i;
struct adreno_context *drawctxt;
if (device->state & KGSL_STATE_HUNG)
return -EBUSY;
if (!(adreno_dev->ringbuffer.flags & KGSL_FLAGS_STARTED) ||
context == NULL || ibdesc == NULL || numibs == 0)
return -EINVAL;
drawctxt = context->devctxt;
if (drawctxt->flags & CTXT_FLAGS_GPU_HANG) {
KGSL_CTXT_WARN(device, "Context %p caused a gpu hang.."
" will not accept commands for this context\n",
drawctxt);
return -EDEADLK;
}
link = kzalloc(sizeof(unsigned int) * numibs * 3, GFP_KERNEL);
cmds = link;
if (!link) {
KGSL_MEM_ERR(device, "Failed to allocate memory for for command"
" submission, size %x\n", numibs * 3);
return -ENOMEM;
}
for (i = 0; i < numibs; i++) {
(void)kgsl_cffdump_parse_ibs(dev_priv, NULL,
ibdesc[i].gpuaddr, ibdesc[i].sizedwords, false);
*cmds++ = CP_HDR_INDIRECT_BUFFER_PFD;
*cmds++ = ibdesc[i].gpuaddr;
*cmds++ = ibdesc[i].sizedwords;
}
kgsl_setstate(device,
kgsl_mmu_pt_get_flags(device->mmu.hwpagetable,
device->id));
adreno_drawctxt_switch(adreno_dev, drawctxt, flags);
*timestamp = adreno_ringbuffer_addcmds(&adreno_dev->ringbuffer,
KGSL_CMD_FLAGS_NOT_KERNEL_CMD,
&link[0], (cmds - link));
KGSL_CMD_INFO(device, "ctxt %d g %08x numibs %d ts %d\n",
context->id, (unsigned int)ibdesc, numibs, *timestamp);
kfree(link);
#ifdef CONFIG_MSM_KGSL_CFF_DUMP
/*
* insert wait for idle after every IB1
* this is conservative but works reliably and is ok
* even for performance simulations
*/
adreno_idle(device, KGSL_TIMEOUT_DEFAULT);
#endif
return 0;
}
int adreno_ringbuffer_extract(struct adreno_ringbuffer *rb,
unsigned int *temp_rb_buffer,
int *rb_size)
{
struct kgsl_device *device = rb->device;
unsigned int rb_rptr;
unsigned int retired_timestamp;
unsigned int temp_idx = 0;
unsigned int value;
unsigned int val1;
unsigned int val2;
unsigned int val3;
unsigned int copy_rb_contents = 0;
unsigned int cur_context;
unsigned int j;
GSL_RB_GET_READPTR(rb, &rb->rptr);
retired_timestamp = device->ftbl->readtimestamp(device,
KGSL_TIMESTAMP_RETIRED);
KGSL_DRV_ERR(device, "GPU successfully executed till ts: %x\n",
retired_timestamp);
/*
* We need to go back in history by 4 dwords from the current location
* of read pointer as 4 dwords are read to match the end of a command.
* Also, take care of wrap around when moving back
*/
if (rb->rptr >= 4)
rb_rptr = (rb->rptr - 4) * sizeof(unsigned int);
else
rb_rptr = rb->buffer_desc.size -
((4 - rb->rptr) * sizeof(unsigned int));
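/*
 * Editor's example: with rb->rptr == 2 and a 32 KB (8192-dword)
 * buffer this wraps to 32768 - (2 * 4) = 32760 bytes, i.e. two
 * dwords before the end of the ring.
 */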
/* Read the rb contents going backwards to locate end of last
* successfully executed command */
while ((rb_rptr / sizeof(unsigned int)) != rb->wptr) {
kgsl_sharedmem_readl(&rb->buffer_desc, &value, rb_rptr);
if (value == retired_timestamp) {
rb_rptr = adreno_ringbuffer_inc_wrapped(rb_rptr,
rb->buffer_desc.size);
kgsl_sharedmem_readl(&rb->buffer_desc, &val1, rb_rptr);
rb_rptr = adreno_ringbuffer_inc_wrapped(rb_rptr,
rb->buffer_desc.size);
kgsl_sharedmem_readl(&rb->buffer_desc, &val2, rb_rptr);
rb_rptr = adreno_ringbuffer_inc_wrapped(rb_rptr,
rb->buffer_desc.size);
kgsl_sharedmem_readl(&rb->buffer_desc, &val3, rb_rptr);
/* match the pattern found at the end of a command */
if ((val1 == 2 &&
val2 == cp_type3_packet(CP_INTERRUPT, 1)
&& val3 == CP_INT_CNTL__RB_INT_MASK) ||
(val1 == cp_type3_packet(CP_EVENT_WRITE, 3)
&& val2 == CACHE_FLUSH_TS &&
val3 == (rb->device->memstore.gpuaddr +
KGSL_DEVICE_MEMSTORE_OFFSET(eoptimestamp)))) {
rb_rptr = adreno_ringbuffer_inc_wrapped(rb_rptr,
rb->buffer_desc.size);
KGSL_DRV_ERR(device,
"Found end of last executed "
"command at offset: %x\n",
rb_rptr / sizeof(unsigned int));
break;
} else {
if (rb_rptr < (3 * sizeof(unsigned int)))
rb_rptr = rb->buffer_desc.size -
(3 * sizeof(unsigned int))
+ rb_rptr;
else
rb_rptr -= (3 * sizeof(unsigned int));
}
}
if (rb_rptr == 0)
rb_rptr = rb->buffer_desc.size - sizeof(unsigned int);
else
rb_rptr -= sizeof(unsigned int);
}
if ((rb_rptr / sizeof(unsigned int)) == rb->wptr) {
KGSL_DRV_ERR(device,
"GPU recovery from hang not possible because last"
" successful timestamp is overwritten\n");
return -EINVAL;
}
/* rb_rptr is now pointing to the first dword of the command following
* the last successfully executed command sequence. Assumption is that
* GPU is hung in the command sequence pointed by rb_rptr */
/* make sure the GPU is not hung in a command submitted by kgsl
* itself */
kgsl_sharedmem_readl(&rb->buffer_desc, &val1, rb_rptr);
kgsl_sharedmem_readl(&rb->buffer_desc, &val2,
adreno_ringbuffer_inc_wrapped(rb_rptr,
rb->buffer_desc.size));
if (val1 == cp_nop_packet(1) && val2 == KGSL_CMD_IDENTIFIER) {
KGSL_DRV_ERR(device,
"GPU recovery from hang not possible because "
"of hang in kgsl command\n");
return -EINVAL;
}
/* current_context is the context that is presently active in the
* GPU, i.e the context in which the hang is caused */
kgsl_sharedmem_readl(&device->memstore, &cur_context,
KGSL_DEVICE_MEMSTORE_OFFSET(current_context));
while ((rb_rptr / sizeof(unsigned int)) != rb->wptr) {
kgsl_sharedmem_readl(&rb->buffer_desc, &value, rb_rptr);
rb_rptr = adreno_ringbuffer_inc_wrapped(rb_rptr,
rb->buffer_desc.size);
/* check for context switch indicator */
if (value == KGSL_CONTEXT_TO_MEM_IDENTIFIER) {
kgsl_sharedmem_readl(&rb->buffer_desc, &value, rb_rptr);
rb_rptr = adreno_ringbuffer_inc_wrapped(rb_rptr,
rb->buffer_desc.size);
BUG_ON(value != cp_type3_packet(CP_MEM_WRITE, 2));
kgsl_sharedmem_readl(&rb->buffer_desc, &val1, rb_rptr);
rb_rptr = adreno_ringbuffer_inc_wrapped(rb_rptr,
rb->buffer_desc.size);
BUG_ON(val1 != (device->memstore.gpuaddr +
KGSL_DEVICE_MEMSTORE_OFFSET(current_context)));
kgsl_sharedmem_readl(&rb->buffer_desc, &value, rb_rptr);
rb_rptr = adreno_ringbuffer_inc_wrapped(rb_rptr,
rb->buffer_desc.size);
BUG_ON((copy_rb_contents == 0) &&
(value == cur_context));
/*
* If we were copying the commands and got to this point
* then we need to remove the 3 commands that appear
* before KGSL_CONTEXT_TO_MEM_IDENTIFIER
*/
if (temp_idx)
temp_idx -= 3;
/* if context switches to a context that did not cause
* hang then start saving the rb contents as those
* commands can be executed */
if (value != cur_context) {
copy_rb_contents = 1;
temp_rb_buffer[temp_idx++] = cp_nop_packet(1);
temp_rb_buffer[temp_idx++] =
KGSL_CMD_IDENTIFIER;
temp_rb_buffer[temp_idx++] = cp_nop_packet(1);
temp_rb_buffer[temp_idx++] =
KGSL_CONTEXT_TO_MEM_IDENTIFIER;
temp_rb_buffer[temp_idx++] =
cp_type3_packet(CP_MEM_WRITE, 2);
temp_rb_buffer[temp_idx++] = val1;
temp_rb_buffer[temp_idx++] = value;
} else {
copy_rb_contents = 0;
}
} else if (copy_rb_contents)
temp_rb_buffer[temp_idx++] = value;
}
*rb_size = temp_idx;
KGSL_DRV_ERR(device, "Extracted rb contents, size: %x\n", *rb_size);
for (temp_idx = 0; temp_idx < *rb_size;) {
char str[80];
int idx = 0;
if ((temp_idx + 8) <= *rb_size)
j = 8;
else
j = *rb_size - temp_idx;
for (; j != 0; j--)
idx += scnprintf(str + idx, 80 - idx,
"%8.8X ", temp_rb_buffer[temp_idx++]);
printk(KERN_ALERT "%s", str);
}
return 0;
}
void
adreno_ringbuffer_restore(struct adreno_ringbuffer *rb, unsigned int *rb_buff,
int num_rb_contents)
{
int i;
unsigned int *ringcmds;
unsigned int rcmd_gpu;
if (!num_rb_contents)
return;
if (num_rb_contents > (rb->buffer_desc.size - rb->wptr)) {
adreno_regwrite(rb->device, REG_CP_RB_RPTR, 0);
rb->rptr = 0;
BUG_ON(num_rb_contents > rb->buffer_desc.size);
}
ringcmds = (unsigned int *)rb->buffer_desc.hostptr + rb->wptr;
rcmd_gpu = rb->buffer_desc.gpuaddr + sizeof(unsigned int) * rb->wptr;
for (i = 0; i < num_rb_contents; i++)
GSL_RB_WRITE(ringcmds, rcmd_gpu, rb_buff[i]);
rb->wptr += num_rb_contents;
adreno_ringbuffer_submit(rb);
}

@ -1,154 +0,0 @@
/* Copyright (c) 2002,2007-2011, Code Aurora Forum. All rights reserved.
* Copyright (C) 2011 Sony Ericsson Mobile Communications AB.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __ADRENO_RINGBUFFER_H
#define __ADRENO_RINGBUFFER_H
#define GSL_RB_USE_MEM_RPTR
#define GSL_RB_USE_MEM_TIMESTAMP
#define GSL_DEVICE_SHADOW_MEMSTORE_TO_USER
/*
* Adreno ringbuffer sizes in bytes - these are converted to
* the appropriate log2 values in the code
*/
#define KGSL_RB_SIZE (32 * 1024)
#define KGSL_RB_BLKSIZE 16
/* CP timestamp register */
#define REG_CP_TIMESTAMP REG_SCRATCH_REG0
struct kgsl_device;
struct kgsl_device_private;
#define GSL_RB_MEMPTRS_SCRATCH_COUNT 8
struct kgsl_rbmemptrs {
int rptr;
int wptr_poll;
};
#define GSL_RB_MEMPTRS_RPTR_OFFSET \
(offsetof(struct kgsl_rbmemptrs, rptr))
#define GSL_RB_MEMPTRS_WPTRPOLL_OFFSET \
(offsetof(struct kgsl_rbmemptrs, wptr_poll))
struct adreno_ringbuffer {
struct kgsl_device *device;
uint32_t flags;
struct kgsl_memdesc buffer_desc;
struct kgsl_memdesc memptrs_desc;
struct kgsl_rbmemptrs *memptrs;
/*ringbuffer size */
unsigned int sizedwords;
unsigned int wptr; /* write pointer offset in dwords from baseaddr */
unsigned int rptr; /* read pointer offset in dwords from baseaddr */
uint32_t timestamp;
};
#define GSL_RB_WRITE(ring, gpuaddr, data) \
do { \
writel_relaxed(data, ring); \
wmb(); \
kgsl_cffdump_setmem(gpuaddr, data, 4); \
ring++; \
gpuaddr += sizeof(uint); \
} while (0)
/* timestamp */
#ifdef GSL_DEVICE_SHADOW_MEMSTORE_TO_USER
#define GSL_RB_USE_MEM_TIMESTAMP
#endif /* GSL_DEVICE_SHADOW_MEMSTORE_TO_USER */
#ifdef GSL_RB_USE_MEM_TIMESTAMP
/* enable timestamp (...scratch0) memory shadowing */
#define GSL_RB_MEMPTRS_SCRATCH_MASK 0x1
#define GSL_RB_INIT_TIMESTAMP(rb)
#else
#define GSL_RB_MEMPTRS_SCRATCH_MASK 0x0
#define GSL_RB_INIT_TIMESTAMP(rb) \
adreno_regwrite((rb)->device->id, REG_CP_TIMESTAMP, 0)
#endif /* GSL_RB_USE_MEM_TIMESTAMP */
/* mem rptr */
#ifdef GSL_RB_USE_MEM_RPTR
#define GSL_RB_CNTL_NO_UPDATE 0x0 /* enable */
#define GSL_RB_GET_READPTR(rb, data) \
do { \
*(data) = readl_relaxed(&(rb)->memptrs->rptr); \
} while (0)
#else
#define GSL_RB_CNTL_NO_UPDATE 0x1 /* disable */
#define GSL_RB_GET_READPTR(rb, data) \
do { \
adreno_regread((rb)->device->id, REG_CP_RB_RPTR, (data)); \
} while (0)
#endif /* GSL_RB_USE_MEM_RPTR */
#define GSL_RB_CNTL_POLL_EN 0x0 /* disable */
int adreno_ringbuffer_issueibcmds(struct kgsl_device_private *dev_priv,
struct kgsl_context *context,
struct kgsl_ibdesc *ibdesc,
unsigned int numibs,
uint32_t *timestamp,
unsigned int flags);
int adreno_ringbuffer_init(struct kgsl_device *device);
int adreno_ringbuffer_start(struct adreno_ringbuffer *rb,
unsigned int init_ram);
void adreno_ringbuffer_stop(struct adreno_ringbuffer *rb);
void adreno_ringbuffer_close(struct adreno_ringbuffer *rb);
void adreno_ringbuffer_issuecmds(struct kgsl_device *device,
unsigned int flags,
unsigned int *cmdaddr,
int sizedwords);
void kgsl_cp_intrcallback(struct kgsl_device *device);
int adreno_ringbuffer_extract(struct adreno_ringbuffer *rb,
unsigned int *temp_rb_buffer,
int *rb_size);
void
adreno_ringbuffer_restore(struct adreno_ringbuffer *rb, unsigned int *rb_buff,
int num_rb_contents);
static inline int adreno_ringbuffer_count(struct adreno_ringbuffer *rb,
unsigned int rptr)
{
if (rb->wptr >= rptr)
return rb->wptr - rptr;
return rb->wptr + rb->sizedwords - rptr;
}
/* Increment a value by 4 bytes with wrap-around based on size */
static inline unsigned int adreno_ringbuffer_inc_wrapped(unsigned int val,
unsigned int size)
{
return (val + sizeof(unsigned int)) % size;
}
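/*
 * Editor's example: for a 32 KB ring this maps byte offset 32764 to
 * (32764 + 4) % 32768 = 0, wrapping back to the start of the buffer.
 */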
#endif /* __ADRENO_RINGBUFFER_H */

File diff suppressed because it is too large

@ -1,203 +0,0 @@
/* Copyright (c) 2008-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __KGSL_H
#define __KGSL_H
#include <linux/types.h>
#include <linux/msm_kgsl.h>
#include <linux/platform_device.h>
#include <linux/clk.h>
#include <linux/interrupt.h>
#include <linux/mutex.h>
#include <linux/cdev.h>
#include <linux/regulator/consumer.h>
#define KGSL_NAME "kgsl"
/*cache coherency ops */
#define DRM_KGSL_GEM_CACHE_OP_TO_DEV 0x0001
#define DRM_KGSL_GEM_CACHE_OP_FROM_DEV 0x0002
/* The size of each entry in a page table */
#define KGSL_PAGETABLE_ENTRY_SIZE 4
/* Pagetable Virtual Address base */
#define KGSL_PAGETABLE_BASE 0x66000000
/* Extra accounting entries needed in the pagetable */
#define KGSL_PT_EXTRA_ENTRIES 16
#define KGSL_PAGETABLE_ENTRIES(_sz) (((_sz) >> PAGE_SHIFT) + \
KGSL_PT_EXTRA_ENTRIES)
#define KGSL_PAGETABLE_SIZE \
ALIGN(KGSL_PAGETABLE_ENTRIES(CONFIG_MSM_KGSL_PAGE_TABLE_SIZE) * \
KGSL_PAGETABLE_ENTRY_SIZE, PAGE_SIZE)
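/*
 * Editor's worked example, assuming CONFIG_MSM_KGSL_PAGE_TABLE_SIZE is
 * 256 MB (0x10000000) and PAGE_SIZE is 4 KB: (0x10000000 >> 12) + 16 =
 * 65552 entries, 65552 * 4 = 262208 bytes, aligned up to 65 pages
 * (266240 bytes) per pagetable.
 */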
#ifdef CONFIG_KGSL_PER_PROCESS_PAGE_TABLE
#define KGSL_PAGETABLE_COUNT (CONFIG_MSM_KGSL_PAGE_TABLE_COUNT)
#else
#define KGSL_PAGETABLE_COUNT 1
#endif
/* Casting using container_of() for structures that kgsl owns. */
#define KGSL_CONTAINER_OF(ptr, type, member) \
container_of(ptr, type, member)
/* A macro for memory statistics - add the new size to the stat and if
the statistic is greater than _max, set _max
*/
#define KGSL_STATS_ADD(_size, _stat, _max) \
do { _stat += (_size); if (_stat > _max) _max = _stat; } while (0)
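/*
 * Editor's usage sketch (hypothetical call): KGSL_STATS_ADD(size,
 * kgsl_driver.stats.vmalloc, kgsl_driver.stats.vmalloc_max) bumps the
 * running total and refreshes the high-water mark in one statement.
 */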
struct kgsl_device;
struct kgsl_driver {
struct cdev cdev;
dev_t major;
struct class *class;
/* Virtual device for managing the core */
struct device virtdev;
/* Kobjects for storing pagetable and process statistics */
struct kobject *ptkobj;
struct kobject *prockobj;
struct kgsl_device *devp[KGSL_DEVICE_MAX];
/* Global list of open processes */
struct list_head process_list;
/* Global list of pagetables */
struct list_head pagetable_list;
/* Spinlock for accessing the pagetable list */
spinlock_t ptlock;
/* Mutex for accessing the process list */
struct mutex process_mutex;
/* Mutex for protecting the device list */
struct mutex devlock;
void *ptpool;
struct {
unsigned int vmalloc;
unsigned int vmalloc_max;
unsigned int coherent;
unsigned int coherent_max;
unsigned int mapped;
unsigned int mapped_max;
unsigned int histogram[16];
} stats;
};
extern struct kgsl_driver kgsl_driver;
#define KGSL_USER_MEMORY 1
#define KGSL_MAPPED_MEMORY 2
struct kgsl_pagetable;
struct kgsl_memdesc_ops;
/* shared memory allocation */
struct kgsl_memdesc {
struct kgsl_pagetable *pagetable;
void *hostptr;
unsigned int gpuaddr;
unsigned int physaddr;
unsigned int size;
unsigned int priv;
struct scatterlist *sg;
unsigned int sglen;
struct kgsl_memdesc_ops *ops;
};
struct kgsl_mem_entry {
struct kref refcount;
struct kgsl_memdesc memdesc;
int memtype;
struct file *file_ptr;
struct list_head list;
uint32_t free_timestamp;
/* back pointer to private structure under whose context this
* allocation is made */
struct kgsl_process_private *priv;
};
#ifdef CONFIG_MSM_KGSL_MMU_PAGE_FAULT
#define MMU_CONFIG 2
#else
#define MMU_CONFIG 1
#endif
void kgsl_mem_entry_destroy(struct kref *kref);
uint8_t *kgsl_gpuaddr_to_vaddr(const struct kgsl_memdesc *memdesc,
unsigned int gpuaddr, unsigned int *size);
struct kgsl_mem_entry *kgsl_sharedmem_find_region(
struct kgsl_process_private *private, unsigned int gpuaddr,
size_t size);
extern const struct dev_pm_ops kgsl_pm_ops;
struct early_suspend;
int kgsl_suspend_driver(struct platform_device *pdev, pm_message_t state);
int kgsl_resume_driver(struct platform_device *pdev);
void kgsl_early_suspend_driver(struct early_suspend *h);
void kgsl_late_resume_driver(struct early_suspend *h);
#ifdef CONFIG_MSM_KGSL_DRM
extern int kgsl_drm_init(struct platform_device *dev);
extern void kgsl_drm_exit(void);
extern void kgsl_gpu_mem_flush(int op);
#else
static inline int kgsl_drm_init(struct platform_device *dev)
{
return 0;
}
static inline void kgsl_drm_exit(void)
{
}
#endif
static inline int kgsl_gpuaddr_in_memdesc(const struct kgsl_memdesc *memdesc,
unsigned int gpuaddr)
{
if (gpuaddr >= memdesc->gpuaddr && (gpuaddr + sizeof(unsigned int)) <=
(memdesc->gpuaddr + memdesc->size)) {
return 1;
}
return 0;
}
static inline int timestamp_cmp(unsigned int new, unsigned int old)
{
int ts_diff = new - old;
if (ts_diff == 0)
return 0;
return ((ts_diff > 0) || (ts_diff < -20000)) ? 1 : -1;
}
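/*
* timestamp_cmp() is wrap-safe for small deltas: e.g. new = 5 and
* old = 0xFFFFFFFE give ts_diff = 7, so `new` still compares as newer
* across the 32-bit rollover; deltas below -20000 are likewise treated
* as wraparound rather than as `old` being ahead.
*/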
static inline void
kgsl_mem_entry_get(struct kgsl_mem_entry *entry)
{
kref_get(&entry->refcount);
}
static inline void
kgsl_mem_entry_put(struct kgsl_mem_entry *entry)
{
kref_put(&entry->refcount, kgsl_mem_entry_destroy);
}
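/*
* Typical lifetime pattern (sketch): a caller that hands a mem entry to
* another context takes a reference first and drops it when finished:
*
*   kgsl_mem_entry_get(entry);
*   ... use entry->memdesc ...
*   kgsl_mem_entry_put(entry);  (may invoke kgsl_mem_entry_destroy())
*/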
#endif /* __KGSL_H */

View File

@ -1,800 +0,0 @@
/* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
/* #define DEBUG */
#define ALIGN_CPU
#include <linux/spinlock.h>
#include <linux/debugfs.h>
#include <linux/relay.h>
#include <linux/slab.h>
#include <linux/time.h>
#include <linux/sched.h>
#include <mach/socinfo.h>
#include "kgsl.h"
#include "kgsl_cffdump.h"
#include "kgsl_debugfs.h"
#include "kgsl_log.h"
#include "kgsl_sharedmem.h"
#include "adreno_pm4types.h"
static struct rchan *chan;
static struct dentry *dir;
static int suspended;
static size_t dropped;
static size_t subbuf_size = 256*1024;
static size_t n_subbufs = 64;
/* forward declarations */
static void destroy_channel(void);
static struct rchan *create_channel(unsigned subbuf_size, unsigned n_subbufs);
static spinlock_t cffdump_lock;
static ulong serial_nr;
static ulong total_bytes;
static ulong total_syncmem;
static long last_sec;
#define MEMBUF_SIZE 64
#define CFF_OP_WRITE_REG 0x00000002
struct cff_op_write_reg {
unsigned char op;
uint addr;
uint value;
} __packed;
#define CFF_OP_POLL_REG 0x00000004
struct cff_op_poll_reg {
unsigned char op;
uint addr;
uint value;
uint mask;
} __packed;
#define CFF_OP_WAIT_IRQ 0x00000005
struct cff_op_wait_irq {
unsigned char op;
} __packed;
#define CFF_OP_RMW 0x0000000a
#define CFF_OP_WRITE_MEM 0x0000000b
struct cff_op_write_mem {
unsigned char op;
uint addr;
uint value;
} __packed;
#define CFF_OP_WRITE_MEMBUF 0x0000000c
struct cff_op_write_membuf {
unsigned char op;
uint addr;
ushort count;
uint buffer[MEMBUF_SIZE];
} __packed;
#define CFF_OP_MEMORY_BASE 0x0000000d
struct cff_op_memory_base {
unsigned char op;
uint base;
uint size;
uint gmemsize;
} __packed;
#define CFF_OP_HANG 0x0000000e
struct cff_op_hang {
unsigned char op;
} __packed;
#define CFF_OP_EOF 0xffffffff
struct cff_op_eof {
unsigned char op;
} __packed;
#define CFF_OP_VERIFY_MEM_FILE 0x00000007
#define CFF_OP_WRITE_SURFACE_PARAMS 0x00000011
struct cff_op_user_event {
unsigned char op;
unsigned int op1;
unsigned int op2;
unsigned int op3;
unsigned int op4;
unsigned int op5;
} __packed;
static void b64_encodeblock(unsigned char in[3], unsigned char out[4], int len)
{
static const char tob64[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmno"
"pqrstuvwxyz0123456789+/";
out[0] = tob64[in[0] >> 2];
out[1] = tob64[((in[0] & 0x03) << 4) | ((in[1] & 0xf0) >> 4)];
out[2] = (unsigned char) (len > 1 ? tob64[((in[1] & 0x0f) << 2)
| ((in[2] & 0xc0) >> 6)] : '=');
out[3] = (unsigned char) (len > 2 ? tob64[in[2] & 0x3f] : '=');
}
static void b64_encode(const unsigned char *in_buf, int in_size,
unsigned char *out_buf, int out_bufsize, int *out_size)
{
unsigned char in[3], out[4];
int i, len;
*out_size = 0;
while (in_size > 0) {
len = 0;
for (i = 0; i < 3; ++i) {
if (in_size-- > 0) {
in[i] = *in_buf++;
++len;
} else
in[i] = 0;
}
if (len) {
b64_encodeblock(in, out, len);
if (out_bufsize < 4) {
pr_warn("kgsl: cffdump: %s: out of buffer\n",
__func__);
return;
}
for (i = 0; i < 4; ++i)
*out_buf++ = out[i];
*out_size += 4;
out_bufsize -= 4;
}
}
}
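/*
* This is standard base64 (e.g. the 3-byte input "Man" encodes to
* "TWFu", with '=' padding for tails shorter than 3 bytes). Output is
* 4 bytes per 3 input bytes plus the NUL the callers append, which is
* why cffdump_printline() sizes out_buf as
* sizeof(cff_op_write_membuf) / 3 * 4 + 16.
*/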
#define KLOG_TMPBUF_SIZE (1024)
static void klog_printk(const char *fmt, ...)
{
/* per-cpu klog formatting temporary buffer */
static char klog_buf[NR_CPUS][KLOG_TMPBUF_SIZE];
va_list args;
int len;
char *cbuf;
unsigned long flags;
local_irq_save(flags);
cbuf = klog_buf[smp_processor_id()];
va_start(args, fmt);
len = vsnprintf(cbuf, KLOG_TMPBUF_SIZE, fmt, args);
total_bytes += len;
va_end(args);
relay_write(chan, cbuf, len);
local_irq_restore(flags);
}
static struct cff_op_write_membuf cff_op_write_membuf;
static void cffdump_membuf(int id, unsigned char *out_buf, int out_bufsize)
{
void *data;
int len, out_size;
struct cff_op_write_mem cff_op_write_mem;
uint addr = cff_op_write_membuf.addr
- sizeof(uint)*cff_op_write_membuf.count;
if (!cff_op_write_membuf.count) {
pr_warn("kgsl: cffdump: membuf: count == 0, skipping");
return;
}
if (cff_op_write_membuf.count != 1) {
cff_op_write_membuf.op = CFF_OP_WRITE_MEMBUF;
cff_op_write_membuf.addr = addr;
len = sizeof(cff_op_write_membuf) -
sizeof(uint)*(MEMBUF_SIZE - cff_op_write_membuf.count);
data = &cff_op_write_membuf;
} else {
cff_op_write_mem.op = CFF_OP_WRITE_MEM;
cff_op_write_mem.addr = addr;
cff_op_write_mem.value = cff_op_write_membuf.buffer[0];
data = &cff_op_write_mem;
len = sizeof(cff_op_write_mem);
}
b64_encode(data, len, out_buf, out_bufsize, &out_size);
out_buf[out_size] = 0;
klog_printk("%ld:%d;%s\n", ++serial_nr, id, out_buf);
cff_op_write_membuf.count = 0;
cff_op_write_membuf.addr = 0;
}
static void cffdump_printline(int id, uint opcode, uint op1, uint op2,
uint op3, uint op4, uint op5)
{
struct cff_op_write_reg cff_op_write_reg;
struct cff_op_poll_reg cff_op_poll_reg;
struct cff_op_wait_irq cff_op_wait_irq;
struct cff_op_memory_base cff_op_memory_base;
struct cff_op_hang cff_op_hang;
struct cff_op_eof cff_op_eof;
struct cff_op_user_event cff_op_user_event;
unsigned char out_buf[sizeof(cff_op_write_membuf)/3*4 + 16];
void *data;
int len = 0, out_size;
long cur_secs;
spin_lock(&cffdump_lock);
if (opcode == CFF_OP_WRITE_MEM) {
if (op1 < 0x40000000 || op1 >= 0x60000000)
KGSL_CORE_ERR("addr out-of-range: op1=%08x", op1);
if ((cff_op_write_membuf.addr != op1 &&
cff_op_write_membuf.count)
|| (cff_op_write_membuf.count == MEMBUF_SIZE))
cffdump_membuf(id, out_buf, sizeof(out_buf));
cff_op_write_membuf.buffer[cff_op_write_membuf.count++] = op2;
cff_op_write_membuf.addr = op1 + sizeof(uint);
spin_unlock(&cffdump_lock);
return;
} else if (cff_op_write_membuf.count)
cffdump_membuf(id, out_buf, sizeof(out_buf));
spin_unlock(&cffdump_lock);
switch (opcode) {
case CFF_OP_WRITE_REG:
cff_op_write_reg.op = opcode;
cff_op_write_reg.addr = op1;
cff_op_write_reg.value = op2;
data = &cff_op_write_reg;
len = sizeof(cff_op_write_reg);
break;
case CFF_OP_POLL_REG:
cff_op_poll_reg.op = opcode;
cff_op_poll_reg.addr = op1;
cff_op_poll_reg.value = op2;
cff_op_poll_reg.mask = op3;
data = &cff_op_poll_reg;
len = sizeof(cff_op_poll_reg);
break;
case CFF_OP_WAIT_IRQ:
cff_op_wait_irq.op = opcode;
data = &cff_op_wait_irq;
len = sizeof(cff_op_wait_irq);
break;
case CFF_OP_MEMORY_BASE:
cff_op_memory_base.op = opcode;
cff_op_memory_base.base = op1;
cff_op_memory_base.size = op2;
cff_op_memory_base.gmemsize = op3;
data = &cff_op_memory_base;
len = sizeof(cff_op_memory_base);
break;
case CFF_OP_HANG:
cff_op_hang.op = opcode;
data = &cff_op_hang;
len = sizeof(cff_op_hang);
break;
case CFF_OP_EOF:
cff_op_eof.op = opcode;
data = &cff_op_eof;
len = sizeof(cff_op_eof);
break;
case CFF_OP_WRITE_SURFACE_PARAMS:
case CFF_OP_VERIFY_MEM_FILE:
cff_op_user_event.op = opcode;
cff_op_user_event.op1 = op1;
cff_op_user_event.op2 = op2;
cff_op_user_event.op3 = op3;
cff_op_user_event.op4 = op4;
cff_op_user_event.op5 = op5;
data = &cff_op_user_event;
len = sizeof(cff_op_user_event);
break;
}
if (len) {
b64_encode(data, len, out_buf, sizeof(out_buf), &out_size);
out_buf[out_size] = 0;
klog_printk("%ld:%d;%s\n", ++serial_nr, id, out_buf);
} else
pr_warn("kgsl: cffdump: unhandled opcode: %d\n", opcode);
cur_secs = get_seconds();
if ((cur_secs - last_sec) > 10 || (last_sec - cur_secs) > 10) {
pr_info("kgsl: cffdump: total [bytes:%lu kB, syncmem:%lu kB], "
"seq#: %lu\n", total_bytes/1024, total_syncmem/1024,
serial_nr);
last_sec = cur_secs;
}
}
void kgsl_cffdump_init(void)
{
struct dentry *debugfs_dir = kgsl_get_debugfs_dir();
#ifdef ALIGN_CPU
cpumask_t mask;
cpumask_clear(&mask);
cpumask_set_cpu(0, &mask);
sched_setaffinity(0, &mask);
#endif
if (!debugfs_dir || IS_ERR(debugfs_dir)) {
KGSL_CORE_ERR("Debugfs directory is bad\n");
return;
}
kgsl_cff_dump_enable = 1;
spin_lock_init(&cffdump_lock);
dir = debugfs_create_dir("cff", debugfs_dir);
if (!dir) {
KGSL_CORE_ERR("debugfs_create_dir failed\n");
return;
}
chan = create_channel(subbuf_size, n_subbufs);
}
void kgsl_cffdump_destroy(void)
{
if (chan)
relay_flush(chan);
destroy_channel();
if (dir)
debugfs_remove(dir);
}
void kgsl_cffdump_open(enum kgsl_deviceid device_id)
{
/*TODO: move this to where we can report correct gmemsize*/
unsigned int va_base;
if (cpu_is_msm8x60() || cpu_is_msm8960() || cpu_is_msm8930())
va_base = 0x40000000;
else
va_base = 0x20000000;
kgsl_cffdump_memory_base(device_id, va_base,
CONFIG_MSM_KGSL_PAGE_TABLE_SIZE, SZ_256K);
}
void kgsl_cffdump_memory_base(enum kgsl_deviceid device_id, unsigned int base,
unsigned int range, unsigned gmemsize)
{
cffdump_printline(device_id, CFF_OP_MEMORY_BASE, base,
range, gmemsize, 0, 0);
}
void kgsl_cffdump_hang(enum kgsl_deviceid device_id)
{
cffdump_printline(device_id, CFF_OP_HANG, 0, 0, 0, 0, 0);
}
void kgsl_cffdump_close(enum kgsl_deviceid device_id)
{
cffdump_printline(device_id, CFF_OP_EOF, 0, 0, 0, 0, 0);
}
void kgsl_cffdump_user_event(unsigned int cff_opcode, unsigned int op1,
unsigned int op2, unsigned int op3,
unsigned int op4, unsigned int op5)
{
cffdump_printline(-1, cff_opcode, op1, op2, op3, op4, op5);
}
void kgsl_cffdump_syncmem(struct kgsl_device_private *dev_priv,
const struct kgsl_memdesc *memdesc, uint gpuaddr, uint sizebytes,
bool clean_cache)
{
const void *src;
uint host_size;
uint physaddr;
if (!kgsl_cff_dump_enable)
return;
total_syncmem += sizebytes;
if (memdesc == NULL) {
struct kgsl_mem_entry *entry;
spin_lock(&dev_priv->process_priv->mem_lock);
entry = kgsl_sharedmem_find_region(dev_priv->process_priv,
gpuaddr, sizebytes);
spin_unlock(&dev_priv->process_priv->mem_lock);
if (entry == NULL) {
KGSL_CORE_ERR("did not find mapping "
"for gpuaddr: 0x%08x\n", gpuaddr);
return;
}
memdesc = &entry->memdesc;
}
BUG_ON(memdesc->gpuaddr == 0);
BUG_ON(gpuaddr == 0);
physaddr = kgsl_get_realaddr(memdesc) + (gpuaddr - memdesc->gpuaddr);
src = kgsl_gpuaddr_to_vaddr(memdesc, gpuaddr, &host_size);
if (src == NULL || host_size < sizebytes) {
KGSL_CORE_ERR("did not find mapping for "
"gpuaddr: 0x%08x, m->host: 0x%p, phys: 0x%08x\n",
gpuaddr, memdesc->hostptr, memdesc->physaddr);
return;
}
if (clean_cache) {
/* Ensure that this memory region is not read from the
* cache but fetched fresh */
mb();
kgsl_cache_range_op((struct kgsl_memdesc *)memdesc,
KGSL_CACHE_OP_INV);
}
BUG_ON(physaddr > 0x66000000 && physaddr < 0x66ffffff);
while (sizebytes > 3) {
cffdump_printline(-1, CFF_OP_WRITE_MEM, gpuaddr, *(uint *)src,
0, 0, 0);
gpuaddr += 4;
src += 4;
sizebytes -= 4;
}
if (sizebytes > 0)
cffdump_printline(-1, CFF_OP_WRITE_MEM, gpuaddr, *(uint *)src,
0, 0, 0);
}
void kgsl_cffdump_setmem(uint addr, uint value, uint sizebytes)
{
if (!kgsl_cff_dump_enable)
return;
BUG_ON(addr > 0x66000000 && addr < 0x66ffffff);
while (sizebytes > 3) {
/* Use 32bit memory writes as long as there's at least
* 4 bytes left */
cffdump_printline(-1, CFF_OP_WRITE_MEM, addr, value,
0, 0, 0);
addr += 4;
sizebytes -= 4;
}
if (sizebytes > 0)
cffdump_printline(-1, CFF_OP_WRITE_MEM, addr, value,
0, 0, 0);
}
void kgsl_cffdump_regwrite(enum kgsl_deviceid device_id, uint addr,
uint value)
{
if (!kgsl_cff_dump_enable)
return;
cffdump_printline(device_id, CFF_OP_WRITE_REG, addr, value,
0, 0, 0);
}
void kgsl_cffdump_regpoll(enum kgsl_deviceid device_id, uint addr,
uint value, uint mask)
{
if (!kgsl_cff_dump_enable)
return;
cffdump_printline(device_id, CFF_OP_POLL_REG, addr, value,
mask, 0, 0);
}
void kgsl_cffdump_slavewrite(uint addr, uint value)
{
if (!kgsl_cff_dump_enable)
return;
cffdump_printline(-1, CFF_OP_WRITE_REG, addr, value, 0, 0, 0);
}
int kgsl_cffdump_waitirq(void)
{
if (!kgsl_cff_dump_enable)
return 0;
cffdump_printline(-1, CFF_OP_WAIT_IRQ, 0, 0, 0, 0, 0);
return 1;
}
EXPORT_SYMBOL(kgsl_cffdump_waitirq);
#define ADDRESS_STACK_SIZE 256
#define GET_PM4_TYPE3_OPCODE(x) ((*(x) >> 8) & 0xFF)
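/*
* PM4 type-3 header sketch: bits 31:30 = packet type (3), bits 29:16 =
* payload dword count - 1, bits 15:8 = opcode. E.g. a header word of
* 0xC0023F00 is a type-3 packet with opcode 0x3F and three payload
* dwords (four dwords total including the header, the `count` computed
* in kgsl_cffdump_parse_ibs()).
*/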
static unsigned int kgsl_cffdump_addr_count;
static bool kgsl_cffdump_handle_type3(struct kgsl_device_private *dev_priv,
uint *hostaddr, bool check_only)
{
static uint addr_stack[ADDRESS_STACK_SIZE];
static uint size_stack[ADDRESS_STACK_SIZE];
switch (GET_PM4_TYPE3_OPCODE(hostaddr)) {
case CP_INDIRECT_BUFFER_PFD:
case CP_INDIRECT_BUFFER:
{
/* traverse indirect buffers */
int i;
uint ibaddr = hostaddr[1];
uint ibsize = hostaddr[2];
/* has this address already been encountered? */
for (i = 0;
i < kgsl_cffdump_addr_count && addr_stack[i] != ibaddr;
++i)
;
if (kgsl_cffdump_addr_count == i) {
addr_stack[kgsl_cffdump_addr_count] = ibaddr;
size_stack[kgsl_cffdump_addr_count++] = ibsize;
if (kgsl_cffdump_addr_count >= ADDRESS_STACK_SIZE) {
KGSL_CORE_ERR("stack overflow\n");
return false;
}
return kgsl_cffdump_parse_ibs(dev_priv, NULL,
ibaddr, ibsize, check_only);
} else if (size_stack[i] != ibsize) {
KGSL_CORE_ERR("gpuaddr: 0x%08x, "
"wc: %u, with size wc: %u already on the "
"stack\n", ibaddr, ibsize, size_stack[i]);
return false;
}
}
break;
}
return true;
}
/*
* Traverse IBs and dump them to the test vector. Detect a swap by
* inspecting register writes, keeping note of the current state, and
* dump the framebuffer configuration to the test vector.
*/
bool kgsl_cffdump_parse_ibs(struct kgsl_device_private *dev_priv,
const struct kgsl_memdesc *memdesc, uint gpuaddr, int sizedwords,
bool check_only)
{
static uint level; /* recursion level */
bool ret = true;
uint host_size;
uint *hostaddr, *hoststart;
int dwords_left = sizedwords; /* dwords left in the current command
buffer */
if (level == 0)
kgsl_cffdump_addr_count = 0;
if (memdesc == NULL) {
struct kgsl_mem_entry *entry;
spin_lock(&dev_priv->process_priv->mem_lock);
entry = kgsl_sharedmem_find_region(dev_priv->process_priv,
gpuaddr, sizedwords * sizeof(uint));
spin_unlock(&dev_priv->process_priv->mem_lock);
if (entry == NULL) {
KGSL_CORE_ERR("did not find mapping "
"for gpuaddr: 0x%08x\n", gpuaddr);
return true;
}
memdesc = &entry->memdesc;
}
hostaddr = (uint *)kgsl_gpuaddr_to_vaddr(memdesc, gpuaddr, &host_size);
if (hostaddr == NULL) {
KGSL_CORE_ERR("did not find mapping for "
"gpuaddr: 0x%08x\n", gpuaddr);
return true;
}
hoststart = hostaddr;
level++;
if (!memdesc->physaddr) {
KGSL_CORE_ERR("no physaddr");
} else {
mb();
kgsl_cache_range_op((struct kgsl_memdesc *)memdesc,
KGSL_CACHE_OP_INV);
}
#ifdef DEBUG
pr_info("kgsl: cffdump: ib: gpuaddr:0x%08x, wc:%d, hptr:%p\n",
gpuaddr, sizedwords, hostaddr);
#endif
while (dwords_left > 0) {
int count = 0; /* dword count including packet header */
bool cur_ret = true;
switch (*hostaddr >> 30) {
case 0x0: /* type-0 */
count = (*hostaddr >> 16)+2;
break;
case 0x1: /* type-1 */
count = 2;
break;
case 0x3: /* type-3 */
count = ((*hostaddr >> 16) & 0x3fff) + 2;
cur_ret = kgsl_cffdump_handle_type3(dev_priv,
hostaddr, check_only);
break;
default:
pr_warn("kgsl: cffdump: parse-ib: unexpected type: "
"type:%d, word:0x%08x @ 0x%p, gpu:0x%08x\n",
*hostaddr >> 30, *hostaddr, hostaddr,
gpuaddr+4*(sizedwords-dwords_left));
cur_ret = false;
count = dwords_left;
break;
}
#ifdef DEBUG
if (!cur_ret) {
pr_info("kgsl: cffdump: bad sub-type: #:%d/%d, v:0x%08x"
" @ 0x%p[gb:0x%08x], level:%d\n",
sizedwords-dwords_left, sizedwords, *hostaddr,
hostaddr, gpuaddr+4*(sizedwords-dwords_left),
level);
print_hex_dump(KERN_ERR, level == 1 ? "IB1:" : "IB2:",
DUMP_PREFIX_OFFSET, 32, 4, hoststart,
sizedwords*4, 0);
}
#endif
ret = ret && cur_ret;
/* jump to next packet */
dwords_left -= count;
hostaddr += count;
cur_ret = dwords_left >= 0;
#ifdef DEBUG
if (!cur_ret) {
pr_info("kgsl: cffdump: bad count: c:%d, #:%d/%d, "
"v:0x%08x @ 0x%p[gb:0x%08x], level:%d\n",
count, sizedwords-(dwords_left+count),
sizedwords, *(hostaddr-count), hostaddr-count,
gpuaddr+4*(sizedwords-(dwords_left+count)),
level);
print_hex_dump(KERN_ERR, level == 1 ? "IB1:" : "IB2:",
DUMP_PREFIX_OFFSET, 32, 4, hoststart,
sizedwords*4, 0);
}
#endif
ret = ret && cur_ret;
}
if (!ret)
pr_info("kgsl: cffdump: parsing failed: gpuaddr:0x%08x, "
"host:0x%p, wc:%d\n", gpuaddr, hoststart, sizedwords);
if (!check_only) {
#ifdef DEBUG
uint offset = gpuaddr - memdesc->gpuaddr;
pr_info("kgsl: cffdump: ib-dump: hostptr:%p, gpuaddr:%08x, "
"physaddr:%08x, offset:%d, size:%d", hoststart,
gpuaddr, memdesc->physaddr + offset, offset,
sizedwords*4);
#endif
kgsl_cffdump_syncmem(dev_priv, memdesc, gpuaddr, sizedwords*4,
false);
}
level--;
return ret;
}
static int subbuf_start_handler(struct rchan_buf *buf,
void *subbuf, void *prev_subbuf, uint prev_padding)
{
pr_debug("kgsl: cffdump: subbuf_start_handler(subbuf=%p, prev_subbuf"
"=%p, prev_padding=%08x)\n", subbuf, prev_subbuf, prev_padding);
if (relay_buf_full(buf)) {
if (!suspended) {
suspended = 1;
pr_warn("kgsl: cffdump: relay: cpu %d buffer full!!!\n",
smp_processor_id());
}
dropped++;
return 0;
} else if (suspended) {
suspended = 0;
pr_warn("kgsl: cffdump: relay: cpu %d buffer no longer full.\n",
smp_processor_id());
}
subbuf_start_reserve(buf, 0);
return 1;
}
static struct dentry *create_buf_file_handler(const char *filename,
struct dentry *parent, int mode, struct rchan_buf *buf,
int *is_global)
{
return debugfs_create_file(filename, mode, parent, buf,
&relay_file_operations);
}
/*
* remove_buf_file() default callback. Removes the relay file from debugfs.
*/
static int remove_buf_file_handler(struct dentry *dentry)
{
pr_info("kgsl: cffdump: %s()\n", __func__);
debugfs_remove(dentry);
return 0;
}
/*
* relay callbacks
*/
static struct rchan_callbacks relay_callbacks = {
.subbuf_start = subbuf_start_handler,
.create_buf_file = create_buf_file_handler,
.remove_buf_file = remove_buf_file_handler,
};
/**
* create_channel - creates channel /debug/kgsl/cff/cpuXXX
*
* Creates channel along with associated produced/consumed control files
*
* Returns channel on success, NULL otherwise
*/
static struct rchan *create_channel(unsigned subbuf_size, unsigned n_subbufs)
{
struct rchan *chan;
pr_info("kgsl: cffdump: relay: create_channel: subbuf_size %u, "
"n_subbufs %u, dir 0x%p\n", subbuf_size, n_subbufs, dir);
chan = relay_open("cpu", dir, subbuf_size,
n_subbufs, &relay_callbacks, NULL);
if (!chan) {
KGSL_CORE_ERR("relay_open failed\n");
return NULL;
}
suspended = 0;
dropped = 0;
return chan;
}
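/*
* With debugfs mounted in the usual place, the per-cpu relay files can
* be drained from user space, e.g. (illustrative path):
*
*   cat /sys/kernel/debug/kgsl/cff/cpu0 > cff.log
*/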
/**
* destroy_channel - destroys channel /debug/kgsl/cff/cpuXXX
*
* Destroys channel along with associated produced/consumed control files
*/
static void destroy_channel(void)
{
pr_info("kgsl: cffdump: relay: destroy_channel\n");
if (chan) {
relay_close(chan);
chan = NULL;
}
}

View File

@ -1,69 +0,0 @@
/* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __KGSL_CFFDUMP_H
#define __KGSL_CFFDUMP_H
#ifdef CONFIG_MSM_KGSL_CFF_DUMP
#include <linux/types.h>
#include "kgsl_device.h"
void kgsl_cffdump_init(void);
void kgsl_cffdump_destroy(void);
void kgsl_cffdump_open(enum kgsl_deviceid device_id);
void kgsl_cffdump_close(enum kgsl_deviceid device_id);
void kgsl_cffdump_syncmem(struct kgsl_device_private *dev_priv,
const struct kgsl_memdesc *memdesc, uint physaddr, uint sizebytes,
bool clean_cache);
void kgsl_cffdump_setmem(uint addr, uint value, uint sizebytes);
void kgsl_cffdump_regwrite(enum kgsl_deviceid device_id, uint addr,
uint value);
void kgsl_cffdump_regpoll(enum kgsl_deviceid device_id, uint addr,
uint value, uint mask);
bool kgsl_cffdump_parse_ibs(struct kgsl_device_private *dev_priv,
const struct kgsl_memdesc *memdesc, uint gpuaddr, int sizedwords,
bool check_only);
void kgsl_cffdump_user_event(unsigned int cff_opcode, unsigned int op1,
unsigned int op2, unsigned int op3,
unsigned int op4, unsigned int op5);
static inline bool kgsl_cffdump_flags_no_memzero(void) { return true; }
void kgsl_cffdump_memory_base(enum kgsl_deviceid device_id, unsigned int base,
unsigned int range, unsigned int gmemsize);
void kgsl_cffdump_hang(enum kgsl_deviceid device_id);
#else
#define kgsl_cffdump_init() (void)0
#define kgsl_cffdump_destroy() (void)0
#define kgsl_cffdump_open(device_id) (void)0
#define kgsl_cffdump_close(device_id) (void)0
#define kgsl_cffdump_syncmem(dev_priv, memdesc, addr, sizebytes, clean_cache) \
(void) 0
#define kgsl_cffdump_setmem(addr, value, sizebytes) (void)0
#define kgsl_cffdump_regwrite(device_id, addr, value) (void)0
#define kgsl_cffdump_regpoll(device_id, addr, value, mask) (void)0
#define kgsl_cffdump_parse_ibs(dev_priv, memdesc, gpuaddr, \
sizedwords, check_only) true
#define kgsl_cffdump_flags_no_memzero() true
#define kgsl_cffdump_memory_base(device_id, base, range, gmemsize) (void)0
#define kgsl_cffdump_hang(device_id) (void)0
#define kgsl_cffdump_user_event(cff_opcode, op1, op2, op3, op4, op5) \
(void)0
#endif /* CONFIG_MSM_KGSL_CFF_DUMP */
#endif /* __KGSL_CFFDUMP_H */

View File

@ -1,87 +0,0 @@
/* Copyright (c) 2002,2008-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/debugfs.h>
#include "kgsl.h"
#include "kgsl_device.h"
/* default log level is error for everything */
#define KGSL_LOG_LEVEL_DEFAULT 3
#define KGSL_LOG_LEVEL_MAX 7
struct dentry *kgsl_debugfs_dir;
static inline int kgsl_log_set(unsigned int *log_val, void *data, u64 val)
{
*log_val = min((unsigned int)val, (unsigned int)KGSL_LOG_LEVEL_MAX);
return 0;
}
#define KGSL_DEBUGFS_LOG(__log) \
static int __log ## _set(void *data, u64 val) \
{ \
struct kgsl_device *device = data; \
return kgsl_log_set(&device->__log, data, val); \
} \
static int __log ## _get(void *data, u64 *val) \
{ \
struct kgsl_device *device = data; \
*val = device->__log; \
return 0; \
} \
DEFINE_SIMPLE_ATTRIBUTE(__log ## _fops, \
__log ## _get, __log ## _set, "%llu\n");
KGSL_DEBUGFS_LOG(drv_log);
KGSL_DEBUGFS_LOG(cmd_log);
KGSL_DEBUGFS_LOG(ctxt_log);
KGSL_DEBUGFS_LOG(mem_log);
KGSL_DEBUGFS_LOG(pwr_log);
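/*
* Each KGSL_DEBUGFS_LOG(x) instantiation above generates x_get(),
* x_set() and an x_fops attribute; e.g. drv_log is wired up below as
* the read/write debugfs file "log_level_drv".
*/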
void kgsl_device_debugfs_init(struct kgsl_device *device)
{
if (kgsl_debugfs_dir && !IS_ERR(kgsl_debugfs_dir))
device->d_debugfs = debugfs_create_dir(device->name,
kgsl_debugfs_dir);
if (!device->d_debugfs || IS_ERR(device->d_debugfs))
return;
device->cmd_log = KGSL_LOG_LEVEL_DEFAULT;
device->ctxt_log = KGSL_LOG_LEVEL_DEFAULT;
device->drv_log = KGSL_LOG_LEVEL_DEFAULT;
device->mem_log = KGSL_LOG_LEVEL_DEFAULT;
device->pwr_log = KGSL_LOG_LEVEL_DEFAULT;
debugfs_create_file("log_level_cmd", 0644, device->d_debugfs, device,
&cmd_log_fops);
debugfs_create_file("log_level_ctxt", 0644, device->d_debugfs, device,
&ctxt_log_fops);
debugfs_create_file("log_level_drv", 0644, device->d_debugfs, device,
&drv_log_fops);
debugfs_create_file("log_level_mem", 0644, device->d_debugfs, device,
&mem_log_fops);
debugfs_create_file("log_level_pwr", 0644, device->d_debugfs, device,
&pwr_log_fops);
}
void kgsl_core_debugfs_init(void)
{
kgsl_debugfs_dir = debugfs_create_dir("kgsl", 0);
}
void kgsl_core_debugfs_close(void)
{
debugfs_remove_recursive(kgsl_debugfs_dir);
}

View File

@ -1,39 +0,0 @@
/* Copyright (c) 2002,2008-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef _KGSL_DEBUGFS_H
#define _KGSL_DEBUGFS_H
struct kgsl_device;
#ifdef CONFIG_DEBUG_FS
void kgsl_core_debugfs_init(void);
void kgsl_core_debugfs_close(void);
void kgsl_device_debugfs_init(struct kgsl_device *device);
extern struct dentry *kgsl_debugfs_dir;
static inline struct dentry *kgsl_get_debugfs_dir(void)
{
return kgsl_debugfs_dir;
}
#else
static inline void kgsl_core_debugfs_init(void) { }
static inline void kgsl_device_debugfs_init(struct kgsl_device *device) { }
static inline void kgsl_core_debugfs_close(void) { }
static inline struct dentry *kgsl_get_debugfs_dir(void) { return NULL; }
#endif
#endif

View File

@ -1,309 +0,0 @@
/* Copyright (c) 2002,2007-2011, Code Aurora Forum. All rights reserved.
* Copyright (C) 2011 Sony Ericsson Mobile Communications AB.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __KGSL_DEVICE_H
#define __KGSL_DEVICE_H
#include <linux/idr.h>
#include <linux/wakelock.h>
#include <linux/pm_qos_params.h>
#include <linux/earlysuspend.h>
#include "kgsl.h"
#include "kgsl_mmu.h"
#include "kgsl_pwrctrl.h"
#include "kgsl_log.h"
#include "kgsl_pwrscale.h"
#define KGSL_TIMEOUT_NONE 0
#define KGSL_TIMEOUT_DEFAULT 0xFFFFFFFF
#define FIRST_TIMEOUT (HZ / 2)
/* KGSL device state is initialized to INIT when platform_probe *
* successfully initialized the device. Once a device has been opened *
* (started) it becomes active. NAP implies that only low latency *
* resources (for now clocks on some platforms) are off. SLEEP implies *
* that the KGSL module believes a device is idle (has been inactive *
* past its timer) and all system resources are released. SUSPEND is *
* requested by the kernel and will be enforced upon all open devices. */
#define KGSL_STATE_NONE 0x00000000
#define KGSL_STATE_INIT 0x00000001
#define KGSL_STATE_ACTIVE 0x00000002
#define KGSL_STATE_NAP 0x00000004
#define KGSL_STATE_SLEEP 0x00000008
#define KGSL_STATE_SUSPEND 0x00000010
#define KGSL_STATE_HUNG 0x00000020
#define KGSL_STATE_DUMP_AND_RECOVER 0x00000040
#define KGSL_GRAPHICS_MEMORY_LOW_WATERMARK 0x1000000
#define KGSL_IS_PAGE_ALIGNED(addr) (!((addr) & (~PAGE_MASK)))
struct kgsl_device;
struct platform_device;
struct kgsl_device_private;
struct kgsl_context;
struct kgsl_power_stats;
struct kgsl_functable {
/* Mandatory functions - these functions must be implemented
by the client device. The driver will not check for a NULL
pointer before calling the hook.
*/
void (*regread) (struct kgsl_device *device,
unsigned int offsetwords, unsigned int *value);
void (*regwrite) (struct kgsl_device *device,
unsigned int offsetwords, unsigned int value);
int (*idle) (struct kgsl_device *device, unsigned int timeout);
unsigned int (*isidle) (struct kgsl_device *device);
int (*suspend_context) (struct kgsl_device *device);
int (*start) (struct kgsl_device *device, unsigned int init_ram);
int (*stop) (struct kgsl_device *device);
int (*getproperty) (struct kgsl_device *device,
enum kgsl_property_type type, void *value,
unsigned int sizebytes);
int (*waittimestamp) (struct kgsl_device *device,
unsigned int timestamp, unsigned int msecs);
unsigned int (*readtimestamp) (struct kgsl_device *device,
enum kgsl_timestamp_type type);
int (*issueibcmds) (struct kgsl_device_private *dev_priv,
struct kgsl_context *context, struct kgsl_ibdesc *ibdesc,
unsigned int sizedwords, uint32_t *timestamp,
unsigned int flags);
int (*setup_pt)(struct kgsl_device *device,
struct kgsl_pagetable *pagetable);
void (*cleanup_pt)(struct kgsl_device *device,
struct kgsl_pagetable *pagetable);
void (*power_stats)(struct kgsl_device *device,
struct kgsl_power_stats *stats);
void (*irqctrl)(struct kgsl_device *device, int state);
/* Optional functions - these functions are not mandatory. The
driver will check that the function pointer is not NULL before
calling the hook */
void (*setstate) (struct kgsl_device *device, uint32_t flags);
int (*drawctxt_create) (struct kgsl_device *device,
struct kgsl_pagetable *pagetable, struct kgsl_context *context,
uint32_t flags);
void (*drawctxt_destroy) (struct kgsl_device *device,
struct kgsl_context *context);
long (*ioctl) (struct kgsl_device_private *dev_priv,
unsigned int cmd, void *data);
};
struct kgsl_memregion {
unsigned char *mmio_virt_base;
unsigned int mmio_phys_base;
uint32_t gpu_base;
unsigned int sizebytes;
};
/* MH register values */
struct kgsl_mh {
unsigned int mharb;
unsigned int mh_intf_cfg1;
unsigned int mh_intf_cfg2;
uint32_t mpu_base;
int mpu_range;
};
struct kgsl_event {
uint32_t timestamp;
void (*func)(struct kgsl_device *, void *, u32);
void *priv;
struct list_head list;
};
struct kgsl_device {
struct device *dev;
const char *name;
unsigned int ver_major;
unsigned int ver_minor;
uint32_t flags;
enum kgsl_deviceid id;
struct kgsl_memregion regspace;
struct kgsl_memdesc memstore;
const char *iomemname;
struct kgsl_mh mh;
struct kgsl_mmu mmu;
struct completion hwaccess_gate;
const struct kgsl_functable *ftbl;
struct work_struct idle_check_ws;
struct timer_list idle_timer;
struct kgsl_pwrctrl pwrctrl;
int open_count;
struct atomic_notifier_head ts_notifier_list;
struct mutex mutex;
uint32_t state;
uint32_t requested_state;
struct list_head memqueue;
unsigned int active_cnt;
struct completion suspend_gate;
wait_queue_head_t wait_queue;
struct workqueue_struct *work_queue;
struct device *parentdev;
struct completion recovery_gate;
struct dentry *d_debugfs;
struct idr context_idr;
struct early_suspend display_off;
/* Logging levels */
int cmd_log;
int ctxt_log;
int drv_log;
int mem_log;
int pwr_log;
struct wake_lock idle_wakelock;
struct kgsl_pwrscale pwrscale;
struct kobject pwrscale_kobj;
struct work_struct ts_expired_ws;
struct list_head events;
};
struct kgsl_context {
uint32_t id;
/* Pointer to the owning device instance */
struct kgsl_device_private *dev_priv;
/* Pointer to the device specific context information */
void *devctxt;
};
struct kgsl_process_private {
unsigned int refcnt;
pid_t pid;
spinlock_t mem_lock;
struct list_head mem_list;
struct kgsl_pagetable *pagetable;
struct list_head list;
struct kobject *kobj;
struct {
unsigned int user;
unsigned int user_max;
unsigned int mapped;
unsigned int mapped_max;
unsigned int flushes;
} stats;
};
struct kgsl_device_private {
struct kgsl_device *device;
struct kgsl_process_private *process_priv;
};
struct kgsl_power_stats {
s64 total_time;
s64 busy_time;
};
struct kgsl_device *kgsl_get_device(int dev_idx);
static inline void kgsl_regread(struct kgsl_device *device,
unsigned int offsetwords,
unsigned int *value)
{
device->ftbl->regread(device, offsetwords, value);
}
static inline void kgsl_regwrite(struct kgsl_device *device,
unsigned int offsetwords,
unsigned int value)
{
device->ftbl->regwrite(device, offsetwords, value);
}
static inline int kgsl_idle(struct kgsl_device *device, unsigned int timeout)
{
return device->ftbl->idle(device, timeout);
}
static inline int kgsl_create_device_sysfs_files(struct device *root,
struct device_attribute **list)
{
int ret = 0, i;
for (i = 0; list[i] != NULL; i++)
ret |= device_create_file(root, list[i]);
return ret;
}
static inline void kgsl_remove_device_sysfs_files(struct device *root,
struct device_attribute **list)
{
int i;
for (i = 0; list[i] != NULL; i++)
device_remove_file(root, list[i]);
}
static inline struct kgsl_mmu *
kgsl_get_mmu(struct kgsl_device *device)
{
return (struct kgsl_mmu *) (device ? &device->mmu : NULL);
}
static inline struct kgsl_device *kgsl_device_from_dev(struct device *dev)
{
int i;
for (i = 0; i < KGSL_DEVICE_MAX; i++) {
if (kgsl_driver.devp[i] && kgsl_driver.devp[i]->dev == dev)
return kgsl_driver.devp[i];
}
return NULL;
}
static inline int kgsl_create_device_workqueue(struct kgsl_device *device)
{
device->work_queue = create_workqueue(device->name);
if (!device->work_queue) {
KGSL_DRV_ERR(device, "create_workqueue(%s) failed\n",
device->name);
return -EINVAL;
}
return 0;
}
static inline struct kgsl_context *
kgsl_find_context(struct kgsl_device_private *dev_priv, uint32_t id)
{
struct kgsl_context *ctxt =
idr_find(&dev_priv->device->context_idr, id);
/* Make sure that the context belongs to the current instance so
that other processes can't guess context IDs and mess things up */
return (ctxt && ctxt->dev_priv == dev_priv) ? ctxt : NULL;
}
int kgsl_check_timestamp(struct kgsl_device *device, unsigned int timestamp);
int kgsl_register_ts_notifier(struct kgsl_device *device,
struct notifier_block *nb);
int kgsl_unregister_ts_notifier(struct kgsl_device *device,
struct notifier_block *nb);
int kgsl_device_platform_probe(struct kgsl_device *device,
irqreturn_t (*dev_isr) (int, void*));
void kgsl_device_platform_remove(struct kgsl_device *device);
#endif /* __KGSL_DEVICE_H */

File diff suppressed because it is too large

View File

@ -1,766 +0,0 @@
/* Copyright (c) 2011-2012, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/types.h>
#include <linux/device.h>
#include <linux/spinlock.h>
#include <linux/genalloc.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include "kgsl.h"
#include "kgsl_mmu.h"
#include "kgsl_device.h"
#include "kgsl_sharedmem.h"
#include "adreno_ringbuffer.h"
static ssize_t
sysfs_show_ptpool_entries(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
struct kgsl_ptpool *pool = (struct kgsl_ptpool *)
kgsl_driver.ptpool;
return snprintf(buf, PAGE_SIZE, "%d\n", pool->entries);
}
static ssize_t
sysfs_show_ptpool_min(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
struct kgsl_ptpool *pool = (struct kgsl_ptpool *)
kgsl_driver.ptpool;
return snprintf(buf, PAGE_SIZE, "%d\n",
pool->static_entries);
}
static ssize_t
sysfs_show_ptpool_chunks(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
struct kgsl_ptpool *pool = (struct kgsl_ptpool *)
kgsl_driver.ptpool;
return snprintf(buf, PAGE_SIZE, "%d\n", pool->chunks);
}
static ssize_t
sysfs_show_ptpool_ptsize(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
struct kgsl_ptpool *pool = (struct kgsl_ptpool *)
kgsl_driver.ptpool;
return snprintf(buf, PAGE_SIZE, "%d\n", pool->ptsize);
}
static struct kobj_attribute attr_ptpool_entries = {
.attr = { .name = "ptpool_entries", .mode = 0444 },
.show = sysfs_show_ptpool_entries,
.store = NULL,
};
static struct kobj_attribute attr_ptpool_min = {
.attr = { .name = "ptpool_min", .mode = 0444 },
.show = sysfs_show_ptpool_min,
.store = NULL,
};
static struct kobj_attribute attr_ptpool_chunks = {
.attr = { .name = "ptpool_chunks", .mode = 0444 },
.show = sysfs_show_ptpool_chunks,
.store = NULL,
};
static struct kobj_attribute attr_ptpool_ptsize = {
.attr = { .name = "ptpool_ptsize", .mode = 0444 },
.show = sysfs_show_ptpool_ptsize,
.store = NULL,
};
static struct attribute *ptpool_attrs[] = {
&attr_ptpool_entries.attr,
&attr_ptpool_min.attr,
&attr_ptpool_chunks.attr,
&attr_ptpool_ptsize.attr,
NULL,
};
static struct attribute_group ptpool_attr_group = {
.attrs = ptpool_attrs,
};
static int
_kgsl_ptpool_add_entries(struct kgsl_ptpool *pool, int count, int dynamic)
{
struct kgsl_ptpool_chunk *chunk;
size_t size = ALIGN(count * pool->ptsize, PAGE_SIZE);
BUG_ON(count == 0);
if (get_order(size) >= MAX_ORDER) {
KGSL_CORE_ERR("ptpool allocation is too big: %d\n", size);
return -EINVAL;
}
chunk = kzalloc(sizeof(*chunk), GFP_KERNEL);
if (chunk == NULL) {
KGSL_CORE_ERR("kzalloc(%d) failed\n", sizeof(*chunk));
return -ENOMEM;
}
chunk->size = size;
chunk->count = count;
chunk->dynamic = dynamic;
chunk->data = dma_alloc_coherent(NULL, size,
&chunk->phys, GFP_KERNEL);
if (chunk->data == NULL) {
KGSL_CORE_ERR("dma_alloc_coherent(%d) failed\n", size);
goto err;
}
chunk->bitmap = kzalloc(BITS_TO_LONGS(count) * 4, GFP_KERNEL);
if (chunk->bitmap == NULL) {
KGSL_CORE_ERR("kzalloc(%d) failed\n",
BITS_TO_LONGS(count) * 4);
goto err_dma;
}
list_add_tail(&chunk->list, &pool->list);
pool->chunks++;
pool->entries += count;
if (!dynamic)
pool->static_entries += count;
return 0;
err_dma:
dma_free_coherent(NULL, chunk->size, chunk->data, chunk->phys);
err:
kfree(chunk);
return -ENOMEM;
}
static void *
_kgsl_ptpool_get_entry(struct kgsl_ptpool *pool, unsigned int *physaddr)
{
struct kgsl_ptpool_chunk *chunk;
list_for_each_entry(chunk, &pool->list, list) {
int bit = find_first_zero_bit(chunk->bitmap, chunk->count);
if (bit >= chunk->count)
continue;
set_bit(bit, chunk->bitmap);
*physaddr = chunk->phys + (bit * pool->ptsize);
return chunk->data + (bit * pool->ptsize);
}
return NULL;
}
/**
* kgsl_ptpool_add
* @pool: A pointer to a ptpool structure
* @count: Number of entries to add
*
* Add static entries to the pagetable pool.
*/
static int
kgsl_ptpool_add(struct kgsl_ptpool *pool, int count)
{
int ret = 0;
BUG_ON(count == 0);
mutex_lock(&pool->lock);
/* Only 4MB can be allocated in one chunk, so larger allocations
need to be split into multiple sections */
while (count) {
int entries = ((count * pool->ptsize) > SZ_4M) ?
SZ_4M / pool->ptsize : count;
/* Add the entries as static, i.e. they don't ever stand
a chance of being removed */
ret = _kgsl_ptpool_add_entries(pool, entries, 0);
if (ret)
break;
count -= entries;
}
mutex_unlock(&pool->lock);
return ret;
}
/**
* kgsl_ptpool_alloc
* @pool: A pointer to a ptpool structure
* @addr: A pointer to store the physical address of the chunk
*
* Allocate a pagetable from the pool. Returns the virtual address
* of the pagetable, the physical address is returned in physaddr
*/
static void *kgsl_ptpool_alloc(struct kgsl_ptpool *pool,
unsigned int *physaddr)
{
void *addr = NULL;
int ret;
mutex_lock(&pool->lock);
addr = _kgsl_ptpool_get_entry(pool, physaddr);
if (addr)
goto done;
/* Add a chunk for 1 more pagetable and mark it as dynamic */
ret = _kgsl_ptpool_add_entries(pool, 1, 1);
if (ret)
goto done;
addr = _kgsl_ptpool_get_entry(pool, physaddr);
done:
mutex_unlock(&pool->lock);
return addr;
}
static inline void _kgsl_ptpool_rm_chunk(struct kgsl_ptpool_chunk *chunk)
{
list_del(&chunk->list);
if (chunk->data)
dma_free_coherent(NULL, chunk->size, chunk->data,
chunk->phys);
kfree(chunk->bitmap);
kfree(chunk);
}
/**
* kgsl_ptpool_free
* @pool: A pointer to a ptpool structure
* @addr: A pointer to the virtual address to free
*
* Free a pagetable allocated from the pool
*/
static void kgsl_ptpool_free(struct kgsl_ptpool *pool, void *addr)
{
struct kgsl_ptpool_chunk *chunk, *tmp;
if (pool == NULL || addr == NULL)
return;
mutex_lock(&pool->lock);
list_for_each_entry_safe(chunk, tmp, &pool->list, list) {
if (addr >= chunk->data &&
addr < chunk->data + chunk->size) {
int bit = ((unsigned long) (addr - chunk->data)) /
pool->ptsize;
clear_bit(bit, chunk->bitmap);
memset(addr, 0, pool->ptsize);
if (chunk->dynamic &&
bitmap_empty(chunk->bitmap, chunk->count))
_kgsl_ptpool_rm_chunk(chunk);
break;
}
}
mutex_unlock(&pool->lock);
}
void kgsl_gpummu_ptpool_destroy(void *ptpool)
{
struct kgsl_ptpool *pool = (struct kgsl_ptpool *)ptpool;
struct kgsl_ptpool_chunk *chunk, *tmp;
if (pool == NULL)
return;
mutex_lock(&pool->lock);
list_for_each_entry_safe(chunk, tmp, &pool->list, list)
_kgsl_ptpool_rm_chunk(chunk);
mutex_unlock(&pool->lock);
kfree(pool);
}
/**
* kgsl_ptpool_init
* @pool: A pointer to a ptpool structure to initialize
* @ptsize: The size of each pagetable entry
* @entries: The number of initial entries to add to the pool
*
* Initialize a pool and allocate an initial chunk of entries.
*/
void *kgsl_gpummu_ptpool_init(int ptsize, int entries)
{
struct kgsl_ptpool *pool;
int ret = 0;
BUG_ON(ptsize == 0);
pool = kzalloc(sizeof(struct kgsl_ptpool), GFP_KERNEL);
if (!pool) {
KGSL_CORE_ERR("Failed to allocate memory "
"for ptpool\n");
return NULL;
}
pool->ptsize = ptsize;
mutex_init(&pool->lock);
INIT_LIST_HEAD(&pool->list);
if (entries) {
ret = kgsl_ptpool_add(pool, entries);
if (ret)
goto err_ptpool_remove;
}
ret = sysfs_create_group(kgsl_driver.ptkobj, &ptpool_attr_group);
if (ret) {
KGSL_CORE_ERR("sysfs_create_group failed for ptpool "
"statistics: %d\n", ret);
goto err_ptpool_remove;
}
return (void *)pool;
err_ptpool_remove:
kgsl_gpummu_ptpool_destroy(pool);
return NULL;
}
int kgsl_gpummu_pt_equal(struct kgsl_pagetable *pt,
unsigned int pt_base)
{
/* check pt before dereferencing its private data */
struct kgsl_gpummu_pt *gpummu_pt = pt ? pt->priv : NULL;
return gpummu_pt && pt_base && (gpummu_pt->base.gpuaddr == pt_base);
}
void kgsl_gpummu_destroy_pagetable(void *mmu_specific_pt)
{
struct kgsl_gpummu_pt *gpummu_pt = (struct kgsl_gpummu_pt *)
mmu_specific_pt;
kgsl_ptpool_free((struct kgsl_ptpool *)kgsl_driver.ptpool,
gpummu_pt->base.hostptr);
kgsl_driver.stats.coherent -= KGSL_PAGETABLE_SIZE;
kfree(gpummu_pt->tlbflushfilter.base);
kfree(gpummu_pt);
}
static inline uint32_t
kgsl_pt_entry_get(unsigned int va_base, uint32_t va)
{
return (va - va_base) >> PAGE_SHIFT;
}
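/*
* Worked example (assuming 4 KB pages): with va_base ==
* KGSL_PAGETABLE_BASE, a GPU virtual address 0x10000 bytes into the
* range maps to pte index (0x10000 >> 12) == 16.
*/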
static inline void
kgsl_pt_map_set(struct kgsl_gpummu_pt *pt, uint32_t pte, uint32_t val)
{
uint32_t *baseptr = (uint32_t *)pt->base.hostptr;
writel_relaxed(val, &baseptr[pte]);
}
static inline uint32_t
kgsl_pt_map_get(struct kgsl_gpummu_pt *pt, uint32_t pte)
{
uint32_t *baseptr = (uint32_t *)pt->base.hostptr;
return readl_relaxed(&baseptr[pte]) & GSL_PT_PAGE_ADDR_MASK;
}
static unsigned int kgsl_gpummu_pt_get_flags(struct kgsl_pagetable *pt,
enum kgsl_deviceid id)
{
unsigned int result = 0;
struct kgsl_gpummu_pt *gpummu_pt;
if (pt == NULL)
return 0;
gpummu_pt = pt->priv;
spin_lock(&pt->lock);
if (gpummu_pt->tlb_flags & (1<<id)) {
result = KGSL_MMUFLAGS_TLBFLUSH;
gpummu_pt->tlb_flags &= ~(1<<id);
}
spin_unlock(&pt->lock);
return result;
}
static void kgsl_gpummu_pagefault(struct kgsl_device *device)
{
unsigned int reg;
unsigned int ptbase;
kgsl_regread(device, MH_MMU_PAGE_FAULT, &reg);
kgsl_regread(device, MH_MMU_PT_BASE, &ptbase);
KGSL_MEM_CRIT(device,
"mmu page fault: page=0x%lx pt=%d op=%s axi=%d\n",
reg & ~(PAGE_SIZE - 1),
kgsl_mmu_get_ptname_from_ptbase(ptbase),
reg & 0x02 ? "WRITE" : "READ", (reg >> 4) & 0xF);
}
static void *kgsl_gpummu_create_pagetable(void)
{
struct kgsl_gpummu_pt *gpummu_pt;
gpummu_pt = kzalloc(sizeof(struct kgsl_gpummu_pt),
GFP_KERNEL);
if (!gpummu_pt)
return NULL;
gpummu_pt->tlb_flags = 0;
gpummu_pt->last_superpte = 0;
gpummu_pt->tlbflushfilter.size = (CONFIG_MSM_KGSL_PAGE_TABLE_SIZE /
(PAGE_SIZE * GSL_PT_SUPER_PTE * 8)) + 1;
gpummu_pt->tlbflushfilter.base = (unsigned int *)
kzalloc(gpummu_pt->tlbflushfilter.size, GFP_KERNEL);
if (!gpummu_pt->tlbflushfilter.base) {
KGSL_CORE_ERR("kzalloc(%d) failed\n",
gpummu_pt->tlbflushfilter.size);
goto err_free_gpummu;
}
GSL_TLBFLUSH_FILTER_RESET();
gpummu_pt->base.hostptr = kgsl_ptpool_alloc((struct kgsl_ptpool *)
kgsl_driver.ptpool,
&gpummu_pt->base.physaddr);
if (gpummu_pt->base.hostptr == NULL)
goto err_flushfilter;
/* ptpool allocations are from coherent memory, so update the
device statistics accordingly */
KGSL_STATS_ADD(KGSL_PAGETABLE_SIZE, kgsl_driver.stats.coherent,
kgsl_driver.stats.coherent_max);
gpummu_pt->base.gpuaddr = gpummu_pt->base.physaddr;
gpummu_pt->base.size = KGSL_PAGETABLE_SIZE;
return (void *)gpummu_pt;
err_flushfilter:
kfree(gpummu_pt->tlbflushfilter.base);
err_free_gpummu:
kfree(gpummu_pt);
return NULL;
}
static void kgsl_gpummu_default_setstate(struct kgsl_device *device,
uint32_t flags)
{
struct kgsl_gpummu_pt *gpummu_pt;
if (!kgsl_mmu_enabled())
return;
if (flags & KGSL_MMUFLAGS_PTUPDATE) {
kgsl_idle(device, KGSL_TIMEOUT_DEFAULT);
gpummu_pt = device->mmu.hwpagetable->priv;
kgsl_regwrite(device, MH_MMU_PT_BASE,
gpummu_pt->base.gpuaddr);
}
if (flags & KGSL_MMUFLAGS_TLBFLUSH) {
/* Invalidate all and tc */
kgsl_regwrite(device, MH_MMU_INVALIDATE, 0x00000003);
}
}
static void kgsl_gpummu_setstate(struct kgsl_device *device,
struct kgsl_pagetable *pagetable)
{
struct kgsl_mmu *mmu = &device->mmu;
struct kgsl_gpummu_pt *gpummu_pt;
if (mmu->flags & KGSL_FLAGS_STARTED) {
/* page table not current, then setup mmu to use new
* specified page table
*/
if (mmu->hwpagetable != pagetable) {
mmu->hwpagetable = pagetable;
spin_lock(&mmu->hwpagetable->lock);
gpummu_pt = mmu->hwpagetable->priv;
gpummu_pt->tlb_flags &= ~(1<<device->id);
spin_unlock(&mmu->hwpagetable->lock);
/* call device specific set page table */
kgsl_setstate(mmu->device, KGSL_MMUFLAGS_TLBFLUSH |
KGSL_MMUFLAGS_PTUPDATE);
}
}
}
static int kgsl_gpummu_init(struct kgsl_device *device)
{
/*
* initialize device mmu
*
* call this with the global lock held
*/
int status = 0;
struct kgsl_mmu *mmu = &device->mmu;
mmu->device = device;
/* sub-client MMU lookups require address translation */
if ((mmu->config & ~0x1) > 0) {
/* make sure the virtual address range is a multiple of 64 KB */
if (CONFIG_MSM_KGSL_PAGE_TABLE_SIZE & ((1 << 16) - 1)) {
KGSL_CORE_ERR("Invalid pagetable size requested "
"for GPUMMU: %x\n", CONFIG_MSM_KGSL_PAGE_TABLE_SIZE);
return -EINVAL;
}
/* allocate memory used for completing r/w operations that
* cannot be mapped by the MMU
*/
status = kgsl_allocate_contiguous(&mmu->setstate_memory, 64);
if (!status)
kgsl_sharedmem_set(&mmu->setstate_memory, 0, 0,
mmu->setstate_memory.size);
}
dev_info(device->dev, "|%s| MMU type set for device is GPUMMU\n",
__func__);
return status;
}
static int kgsl_gpummu_start(struct kgsl_device *device)
{
/*
* initialize device mmu
*
* call this with the global lock held
*/
struct kgsl_mmu *mmu = &device->mmu;
struct kgsl_gpummu_pt *gpummu_pt;
if (mmu->flags & KGSL_FLAGS_STARTED)
return 0;
/* MMU not enabled */
if ((mmu->config & 0x1) == 0)
return 0;
/* setup MMU and sub-client behavior */
kgsl_regwrite(device, MH_MMU_CONFIG, mmu->config);
/* idle device */
kgsl_idle(device, KGSL_TIMEOUT_DEFAULT);
/* enable axi interrupts */
kgsl_regwrite(device, MH_INTERRUPT_MASK,
GSL_MMU_INT_MASK | MH_INTERRUPT_MASK__MMU_PAGE_FAULT);
kgsl_sharedmem_set(&mmu->setstate_memory, 0, 0,
mmu->setstate_memory.size);
/* TRAN_ERROR needs a 32 byte (32 byte aligned) chunk of memory
* to complete transactions in case of an MMU fault. Note that
* we'll leave the bottom 32 bytes of the setstate_memory for other
* purposes (e.g. use it when dummy read cycles are needed
* for other blocks) */
kgsl_regwrite(device, MH_MMU_TRAN_ERROR,
mmu->setstate_memory.physaddr + 32);
if (mmu->defaultpagetable == NULL)
mmu->defaultpagetable =
kgsl_mmu_getpagetable(KGSL_MMU_GLOBAL_PT);
/* Return error if the default pagetable doesn't exist */
if (mmu->defaultpagetable == NULL)
return -ENOMEM;
mmu->hwpagetable = mmu->defaultpagetable;
gpummu_pt = mmu->hwpagetable->priv;
kgsl_regwrite(device, MH_MMU_PT_BASE,
gpummu_pt->base.gpuaddr);
kgsl_regwrite(device, MH_MMU_VA_RANGE,
(KGSL_PAGETABLE_BASE |
(CONFIG_MSM_KGSL_PAGE_TABLE_SIZE >> 16)));
kgsl_setstate(device, KGSL_MMUFLAGS_TLBFLUSH);
mmu->flags |= KGSL_FLAGS_STARTED;
return 0;
}
static int
kgsl_gpummu_unmap(void *mmu_specific_pt,
struct kgsl_memdesc *memdesc)
{
unsigned int numpages;
unsigned int pte, ptefirst, ptelast, superpte;
unsigned int range = memdesc->size;
struct kgsl_gpummu_pt *gpummu_pt = mmu_specific_pt;
/* All GPU addresses as assigned are page aligned, but some
functions perturb the gpuaddr with an offset, so apply the
mask here to make sure we have the right address */
unsigned int gpuaddr = memdesc->gpuaddr & KGSL_MMU_ALIGN_MASK;
numpages = (range >> PAGE_SHIFT);
if (range & (PAGE_SIZE - 1))
numpages++;
ptefirst = kgsl_pt_entry_get(KGSL_PAGETABLE_BASE, gpuaddr);
ptelast = ptefirst + numpages;
superpte = ptefirst - (ptefirst & (GSL_PT_SUPER_PTE-1));
GSL_TLBFLUSH_FILTER_SETDIRTY(superpte / GSL_PT_SUPER_PTE);
for (pte = ptefirst; pte < ptelast; pte++) {
#ifdef VERBOSE_DEBUG
/* check if PTE exists */
if (!kgsl_pt_map_get(gpummu_pt, pte))
KGSL_CORE_ERR("pt entry %x is already "
"unmapped for pagetable %p\n", pte, gpummu_pt);
#endif
kgsl_pt_map_set(gpummu_pt, pte, GSL_PT_PAGE_DIRTY);
superpte = pte - (pte & (GSL_PT_SUPER_PTE - 1));
if (pte == superpte)
GSL_TLBFLUSH_FILTER_SETDIRTY(superpte /
GSL_PT_SUPER_PTE);
}
/* Post all writes to the pagetable */
wmb();
return 0;
}
#define SUPERPTE_IS_DIRTY(_p) \
(((_p) & (GSL_PT_SUPER_PTE - 1)) == 0 && \
GSL_TLBFLUSH_FILTER_ISDIRTY((_p) / GSL_PT_SUPER_PTE))
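/*
* SUPERPTE_IS_DIRTY(pte) is true only for the first pte of a superpte
* group whose bit is set in the flush filter. Sketch, assuming
* GSL_PT_SUPER_PTE == 8 (defined elsewhere): pte 24 tests filter bit 3,
* while ptes 25..31 fail the alignment check and never match.
*/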
static int
kgsl_gpummu_map(void *mmu_specific_pt,
struct kgsl_memdesc *memdesc,
unsigned int protflags)
{
unsigned int pte;
struct kgsl_gpummu_pt *gpummu_pt = mmu_specific_pt;
struct scatterlist *s;
int flushtlb = 0;
int i;
pte = kgsl_pt_entry_get(KGSL_PAGETABLE_BASE, memdesc->gpuaddr);
/* Flush the TLB if the first PTE isn't at the superpte boundary */
if (pte & (GSL_PT_SUPER_PTE - 1))
flushtlb = 1;
for_each_sg(memdesc->sg, s, memdesc->sglen, i) {
unsigned int paddr = sg_phys(s);
unsigned int j;
/* Each sg entry might be multiple pages long */
for (j = paddr; j < paddr + s->length; pte++, j += PAGE_SIZE) {
if (SUPERPTE_IS_DIRTY(pte))
flushtlb = 1;
kgsl_pt_map_set(gpummu_pt, pte, j | protflags);
}
}
/* Flush the TLB if the last PTE isn't at the superpte boundary */
if ((pte + 1) & (GSL_PT_SUPER_PTE - 1))
flushtlb = 1;
wmb();
if (flushtlb) {
/*set all devices as needing flushing*/
gpummu_pt->tlb_flags = UINT_MAX;
GSL_TLBFLUSH_FILTER_RESET();
}
return 0;
}
static int kgsl_gpummu_stop(struct kgsl_device *device)
{
struct kgsl_mmu *mmu = &device->mmu;
kgsl_regwrite(device, MH_MMU_CONFIG, 0x00000000);
mmu->flags &= ~KGSL_FLAGS_STARTED;
return 0;
}
static int kgsl_gpummu_close(struct kgsl_device *device)
{
/*
* close device mmu
*
* call this with the global lock held
*/
struct kgsl_mmu *mmu = &device->mmu;
if (mmu->setstate_memory.gpuaddr)
kgsl_sharedmem_free(&mmu->setstate_memory);
if (mmu->defaultpagetable)
kgsl_mmu_putpagetable(mmu->defaultpagetable);
return 0;
}
static unsigned int
kgsl_gpummu_get_current_ptbase(struct kgsl_device *device)
{
unsigned int ptbase;
kgsl_regread(device, MH_MMU_PT_BASE, &ptbase);
return ptbase;
}
struct kgsl_mmu_ops gpummu_ops = {
.mmu_init = kgsl_gpummu_init,
.mmu_close = kgsl_gpummu_close,
.mmu_start = kgsl_gpummu_start,
.mmu_stop = kgsl_gpummu_stop,
.mmu_setstate = kgsl_gpummu_setstate,
.mmu_device_setstate = kgsl_gpummu_default_setstate,
.mmu_pagefault = kgsl_gpummu_pagefault,
.mmu_get_current_ptbase = kgsl_gpummu_get_current_ptbase,
};
struct kgsl_mmu_pt_ops gpummu_pt_ops = {
.mmu_map = kgsl_gpummu_map,
.mmu_unmap = kgsl_gpummu_unmap,
.mmu_create_pagetable = kgsl_gpummu_create_pagetable,
.mmu_destroy_pagetable = kgsl_gpummu_destroy_pagetable,
.mmu_pt_equal = kgsl_gpummu_pt_equal,
.mmu_pt_get_flags = kgsl_gpummu_pt_get_flags,
};

View File

@ -1,85 +0,0 @@
/* Copyright (c) 2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __KGSL_GPUMMU_H
#define __KGSL_GPUMMU_H
#define GSL_PT_PAGE_BITS_MASK 0x00000007
#define GSL_PT_PAGE_ADDR_MASK PAGE_MASK
#define GSL_MMU_INT_MASK \
(MH_INTERRUPT_MASK__AXI_READ_ERROR | \
MH_INTERRUPT_MASK__AXI_WRITE_ERROR)
/* Macros to manage TLB flushing */
#define GSL_TLBFLUSH_FILTER_ENTRY_NUMBITS (sizeof(unsigned char) * 8)
#define GSL_TLBFLUSH_FILTER_GET(superpte) \
(*((unsigned char *) \
(((unsigned int)gpummu_pt->tlbflushfilter.base) \
+ (superpte / GSL_TLBFLUSH_FILTER_ENTRY_NUMBITS))))
#define GSL_TLBFLUSH_FILTER_SETDIRTY(superpte) \
(GSL_TLBFLUSH_FILTER_GET((superpte)) |= 1 << \
(superpte % GSL_TLBFLUSH_FILTER_ENTRY_NUMBITS))
#define GSL_TLBFLUSH_FILTER_ISDIRTY(superpte) \
(GSL_TLBFLUSH_FILTER_GET((superpte)) & \
(1 << (superpte % GSL_TLBFLUSH_FILTER_ENTRY_NUMBITS)))
#define GSL_TLBFLUSH_FILTER_RESET() memset(gpummu_pt->tlbflushfilter.base,\
0, gpummu_pt->tlbflushfilter.size)
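/*
* The flush filter is a plain bitmap with one bit per superpte group:
* GSL_TLBFLUSH_FILTER_GET() selects the byte holding the bit and
* SETDIRTY/ISDIRTY address the bit within it. E.g. marking group 19
* dirty sets bit 3 of byte 2 of tlbflushfilter.base.
*/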
extern struct kgsl_mmu_ops gpummu_ops;
extern struct kgsl_mmu_pt_ops gpummu_pt_ops;
struct kgsl_tlbflushfilter {
unsigned int *base;
unsigned int size;
};
struct kgsl_gpummu_pt {
struct kgsl_memdesc base;
unsigned int last_superpte;
unsigned int tlb_flags;
/* Maintain filter to manage tlb flushing */
struct kgsl_tlbflushfilter tlbflushfilter;
};
struct kgsl_ptpool_chunk {
size_t size;
unsigned int count;
int dynamic;
void *data;
unsigned int phys;
unsigned long *bitmap;
struct list_head list;
};
struct kgsl_ptpool {
size_t ptsize;
struct mutex lock;
struct list_head list;
int entries;
int static_entries;
int chunks;
};
void *kgsl_gpummu_ptpool_init(int ptsize,
int entries);
void kgsl_gpummu_ptpool_destroy(void *ptpool);
static inline unsigned int kgsl_pt_get_base_addr(struct kgsl_pagetable *pt)
{
struct kgsl_gpummu_pt *gpummu_pt = pt->priv;
return gpummu_pt->base.gpuaddr;
}
#endif /* __KGSL_GPUMMU_H */

View File

@ -1,333 +0,0 @@
/* Copyright (c) 2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/types.h>
#include <linux/device.h>
#include <linux/spinlock.h>
#include <linux/genalloc.h>
#include <linux/slab.h>
#include <linux/iommu.h>
#include <mach/iommu.h>
#include <linux/msm_kgsl.h>
#include "kgsl.h"
#include "kgsl_device.h"
#include "kgsl_mmu.h"
#include "kgsl_sharedmem.h"
struct kgsl_iommu {
struct device *iommu_user_dev;
int iommu_user_dev_attached;
struct device *iommu_priv_dev;
int iommu_priv_dev_attached;
};
static int kgsl_iommu_pt_equal(struct kgsl_pagetable *pt,
unsigned int pt_base)
{
/* check pt before dereferencing its private data */
struct iommu_domain *domain = pt ? pt->priv : NULL;
return domain && pt_base && ((unsigned int)domain == pt_base);
}
static void kgsl_iommu_destroy_pagetable(void *mmu_specific_pt)
{
struct iommu_domain *domain = mmu_specific_pt;
if (domain)
iommu_domain_free(domain);
}
void *kgsl_iommu_create_pagetable(void)
{
struct iommu_domain *domain = iommu_domain_alloc(0);
if (!domain)
KGSL_CORE_ERR("Failed to create iommu domain\n");
return domain;
}
static void kgsl_detach_pagetable_iommu_domain(struct kgsl_mmu *mmu)
{
struct iommu_domain *domain;
struct kgsl_iommu *iommu = mmu->priv;
BUG_ON(mmu->hwpagetable == NULL);
BUG_ON(mmu->hwpagetable->priv == NULL);
domain = mmu->hwpagetable->priv;
if (iommu->iommu_user_dev_attached) {
iommu_detach_device(domain, iommu->iommu_user_dev);
iommu->iommu_user_dev_attached = 0;
KGSL_MEM_INFO(mmu->device,
"iommu %p detached from user dev of MMU: %p\n",
domain, mmu);
}
if (iommu->iommu_priv_dev_attached) {
iommu_detach_device(domain, iommu->iommu_priv_dev);
iommu->iommu_priv_dev_attached = 0;
KGSL_MEM_INFO(mmu->device,
"iommu %p detached from priv dev of MMU: %p\n",
domain, mmu);
}
}
static int kgsl_attach_pagetable_iommu_domain(struct kgsl_mmu *mmu)
{
struct iommu_domain *domain;
int ret = 0;
struct kgsl_iommu *iommu = mmu->priv;
BUG_ON(mmu->hwpagetable == NULL);
BUG_ON(mmu->hwpagetable->priv == NULL);
domain = mmu->hwpagetable->priv;
if (iommu->iommu_user_dev && !iommu->iommu_user_dev_attached) {
ret = iommu_attach_device(domain, iommu->iommu_user_dev);
if (ret) {
KGSL_MEM_ERR(mmu->device,
"Failed to attach device, err %d\n", ret);
goto done;
}
iommu->iommu_user_dev_attached = 1;
KGSL_MEM_INFO(mmu->device,
"iommu %p attached to user dev of MMU: %p\n",
domain, mmu);
}
if (iommu->iommu_priv_dev && !iommu->iommu_priv_dev_attached) {
ret = iommu_attach_device(domain, iommu->iommu_priv_dev);
if (ret) {
KGSL_MEM_ERR(mmu->device,
"Failed to attach device, err %d\n", ret);
iommu_detach_device(domain, iommu->iommu_user_dev);
iommu->iommu_user_dev_attached = 0;
goto done;
}
iommu->iommu_priv_dev_attached = 1;
KGSL_MEM_INFO(mmu->device,
"iommu %p attached to priv dev of MMU: %p\n",
domain, mmu);
}
done:
return ret;
}
static int kgsl_get_iommu_ctxt(struct kgsl_iommu *iommu,
struct kgsl_device *device)
{
int status = 0;
struct platform_device *pdev =
container_of(device->parentdev, struct platform_device, dev);
struct kgsl_device_platform_data *pdata_dev = pdev->dev.platform_data;
if (pdata_dev->iommu_user_ctx_name)
iommu->iommu_user_dev = msm_iommu_get_ctx(
pdata_dev->iommu_user_ctx_name);
if (pdata_dev->iommu_priv_ctx_name)
iommu->iommu_priv_dev = msm_iommu_get_ctx(
pdata_dev->iommu_priv_ctx_name);
if (!iommu->iommu_user_dev) {
KGSL_CORE_ERR("Failed to get user iommu dev handle for "
"device %s\n",
pdata_dev->iommu_user_ctx_name);
status = -EINVAL;
}
return status;
}
static void kgsl_iommu_setstate(struct kgsl_device *device,
struct kgsl_pagetable *pagetable)
{
struct kgsl_mmu *mmu = &device->mmu;
if (mmu->flags & KGSL_FLAGS_STARTED) {
/* if the specified page table is not already current,
* set up the mmu to use it
*/
if (mmu->hwpagetable != pagetable) {
kgsl_idle(device, KGSL_TIMEOUT_DEFAULT);
kgsl_detach_pagetable_iommu_domain(mmu);
mmu->hwpagetable = pagetable;
if (mmu->hwpagetable)
kgsl_attach_pagetable_iommu_domain(mmu);
}
}
}
static int kgsl_iommu_init(struct kgsl_device *device)
{
/*
* initialize device mmu
*
* call this with the global lock held
*/
int status = 0;
struct kgsl_mmu *mmu = &device->mmu;
struct kgsl_iommu *iommu;
mmu->device = device;
iommu = kzalloc(sizeof(struct kgsl_iommu), GFP_KERNEL);
if (!iommu) {
KGSL_CORE_ERR("kzalloc(%d) failed\n",
sizeof(struct kgsl_iommu));
return -ENOMEM;
}
iommu->iommu_priv_dev_attached = 0;
iommu->iommu_user_dev_attached = 0;
status = kgsl_get_iommu_ctxt(iommu, device);
if (status) {
kfree(iommu);
iommu = NULL;
}
mmu->priv = iommu;
dev_info(device->dev, "|%s| MMU type set for device is IOMMU\n",
__func__);
return status;
}
static int kgsl_iommu_start(struct kgsl_device *device)
{
int status;
struct kgsl_mmu *mmu = &device->mmu;
if (mmu->flags & KGSL_FLAGS_STARTED)
return 0;
kgsl_regwrite(device, MH_MMU_CONFIG, 0x00000000);
if (mmu->defaultpagetable == NULL)
mmu->defaultpagetable =
kgsl_mmu_getpagetable(KGSL_MMU_GLOBAL_PT);
/* Return error if the default pagetable doesn't exist */
if (mmu->defaultpagetable == NULL)
return -ENOMEM;
mmu->hwpagetable = mmu->defaultpagetable;
status = kgsl_attach_pagetable_iommu_domain(mmu);
if (!status)
mmu->flags |= KGSL_FLAGS_STARTED;
return status;
}
static int
kgsl_iommu_unmap(void *mmu_specific_pt,
struct kgsl_memdesc *memdesc)
{
int ret;
unsigned int range = memdesc->size;
struct iommu_domain *domain = (struct iommu_domain *)
mmu_specific_pt;
/* All GPU addresses as assigned are page aligned, but some
functions perturb the gpuaddr with an offset, so apply the
mask here to make sure we have the right address */
unsigned int gpuaddr = memdesc->gpuaddr & KGSL_MMU_ALIGN_MASK;
if (range == 0 || gpuaddr == 0)
return 0;
ret = iommu_unmap_range(domain, gpuaddr, range);
if (ret)
KGSL_CORE_ERR("iommu_unmap_range(%p, %x, %d) failed "
"with err: %d\n", domain, gpuaddr,
range, ret);
return 0;
}
static int
kgsl_iommu_map(void *mmu_specific_pt,
struct kgsl_memdesc *memdesc,
unsigned int protflags)
{
int ret;
unsigned int iommu_virt_addr;
struct iommu_domain *domain = mmu_specific_pt;
BUG_ON(NULL == domain);
iommu_virt_addr = memdesc->gpuaddr;
ret = iommu_map_range(domain, iommu_virt_addr, memdesc->sg,
memdesc->size, MSM_IOMMU_ATTR_NONCACHED);
if (ret) {
KGSL_CORE_ERR("iommu_map_range(%p, %x, %p, %d, %d) "
"failed with err: %d\n", domain,
iommu_virt_addr, memdesc->sg, memdesc->size,
MSM_IOMMU_ATTR_NONCACHED, ret);
return ret;
}
return ret;
}
static int kgsl_iommu_stop(struct kgsl_device *device)
{
/*
* stop device mmu
*
* call this with the global lock held
*/
struct kgsl_mmu *mmu = &device->mmu;
if (mmu->flags & KGSL_FLAGS_STARTED) {
/* detach iommu attachment */
kgsl_detach_pagetable_iommu_domain(mmu);
mmu->flags &= ~KGSL_FLAGS_STARTED;
}
return 0;
}
static int kgsl_iommu_close(struct kgsl_device *device)
{
struct kgsl_mmu *mmu = &device->mmu;
if (mmu->defaultpagetable)
kgsl_mmu_putpagetable(mmu->defaultpagetable);
return 0;
}
static unsigned int
kgsl_iommu_get_current_ptbase(struct kgsl_device *device)
{
/* Current base is always the hwpagetable's domain as we
* do not use per-process pagetables right now for iommu.
* This will change when we switch to per-process pagetables.
*/
return (unsigned int)device->mmu.hwpagetable->priv;
}
struct kgsl_mmu_ops iommu_ops = {
.mmu_init = kgsl_iommu_init,
.mmu_close = kgsl_iommu_close,
.mmu_start = kgsl_iommu_start,
.mmu_stop = kgsl_iommu_stop,
.mmu_setstate = kgsl_iommu_setstate,
.mmu_device_setstate = NULL,
.mmu_pagefault = NULL,
.mmu_get_current_ptbase = kgsl_iommu_get_current_ptbase,
};
struct kgsl_mmu_pt_ops iommu_pt_ops = {
.mmu_map = kgsl_iommu_map,
.mmu_unmap = kgsl_iommu_unmap,
.mmu_create_pagetable = kgsl_iommu_create_pagetable,
.mmu_destroy_pagetable = kgsl_iommu_destroy_pagetable,
.mmu_pt_equal = kgsl_iommu_pt_equal,
.mmu_pt_get_flags = NULL,
};
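iommu_ops and iommu_pt_ops are the entire contract between this backend and the generic layer: kgsl_mmu.c stores such a table in mmu->mmu_ops / pt->pt_ops and only ever calls through the pointers. A self-contained sketch of the same dispatch pattern, using illustrative names rather than the driver's:
#include <stdio.h>
struct backend_ops {
	int (*start)(void);
	int (*stop)(void);
};
static int demo_start(void) { puts("backend started"); return 0; }
static int demo_stop(void) { puts("backend stopped"); return 0; }
static const struct backend_ops demo_ops = {
	.start = demo_start,
	.stop = demo_stop,
};
int main(void)
{
	/* chosen once at init time, exactly like mmu->mmu_ops */
	const struct backend_ops *ops = &demo_ops;
	ops->start(); /* callers never name the backend */
	return ops->stop();
}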

View File

@ -1,102 +0,0 @@
/* Copyright (c) 2002,2008-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __KGSL_LOG_H
#define __KGSL_LOG_H
extern unsigned int kgsl_cff_dump_enable;
#define KGSL_LOG_INFO(dev, lvl, fmt, args...) \
do { \
if ((lvl) >= 6) \
dev_info(dev, "|%s| " fmt, \
__func__, ##args);\
} while (0)
#define KGSL_LOG_WARN(dev, lvl, fmt, args...) \
do { \
if ((lvl) >= 4) \
dev_warn(dev, "|%s| " fmt, \
__func__, ##args);\
} while (0)
#define KGSL_LOG_ERR(dev, lvl, fmt, args...) \
do { \
if ((lvl) >= 3) \
dev_err(dev, "|%s| " fmt, \
__func__, ##args);\
} while (0)
#define KGSL_LOG_CRIT(dev, lvl, fmt, args...) \
do { \
if ((lvl) >= 2) \
dev_crit(dev, "|%s| " fmt, \
__func__, ##args);\
} while (0)
#define KGSL_LOG_POSTMORTEM_WRITE(_dev, fmt, args...) \
do { dev_crit(_dev->dev, fmt, ##args); } while (0)
#define KGSL_LOG_DUMP(_dev, fmt, args...) dev_err(_dev->dev, fmt, ##args)
#define KGSL_DRV_INFO(_dev, fmt, args...) \
KGSL_LOG_INFO(_dev->dev, _dev->drv_log, fmt, ##args)
#define KGSL_DRV_WARN(_dev, fmt, args...) \
KGSL_LOG_WARN(_dev->dev, _dev->drv_log, fmt, ##args)
#define KGSL_DRV_ERR(_dev, fmt, args...) \
KGSL_LOG_ERR(_dev->dev, _dev->drv_log, fmt, ##args)
#define KGSL_DRV_CRIT(_dev, fmt, args...) \
KGSL_LOG_CRIT(_dev->dev, _dev->drv_log, fmt, ##args)
#define KGSL_CMD_INFO(_dev, fmt, args...) \
KGSL_LOG_INFO(_dev->dev, _dev->cmd_log, fmt, ##args)
#define KGSL_CMD_WARN(_dev, fmt, args...) \
KGSL_LOG_WARN(_dev->dev, _dev->cmd_log, fmt, ##args)
#define KGSL_CMD_ERR(_dev, fmt, args...) \
KGSL_LOG_ERR(_dev->dev, _dev->cmd_log, fmt, ##args)
#define KGSL_CMD_CRIT(_dev, fmt, args...) \
KGSL_LOG_CRIT(_dev->dev, _dev->cmd_log, fmt, ##args)
#define KGSL_CTXT_INFO(_dev, fmt, args...) \
KGSL_LOG_INFO(_dev->dev, _dev->ctxt_log, fmt, ##args)
#define KGSL_CTXT_WARN(_dev, fmt, args...) \
KGSL_LOG_WARN(_dev->dev, _dev->ctxt_log, fmt, ##args)
#define KGSL_CTXT_ERR(_dev, fmt, args...) \
KGSL_LOG_ERR(_dev->dev, _dev->ctxt_log, fmt, ##args)
#define KGSL_CTXT_CRIT(_dev, fmt, args...) \
KGSL_LOG_CRIT(_dev->dev, _dev->ctxt_log, fmt, ##args)
#define KGSL_MEM_INFO(_dev, fmt, args...) \
KGSL_LOG_INFO(_dev->dev, _dev->mem_log, fmt, ##args)
#define KGSL_MEM_WARN(_dev, fmt, args...) \
KGSL_LOG_WARN(_dev->dev, _dev->mem_log, fmt, ##args)
#define KGSL_MEM_ERR(_dev, fmt, args...) \
KGSL_LOG_ERR(_dev->dev, _dev->mem_log, fmt, ##args)
#define KGSL_MEM_CRIT(_dev, fmt, args...) \
KGSL_LOG_CRIT(_dev->dev, _dev->mem_log, fmt, ##args)
#define KGSL_PWR_INFO(_dev, fmt, args...) \
KGSL_LOG_INFO(_dev->dev, _dev->pwr_log, fmt, ##args)
#define KGSL_PWR_WARN(_dev, fmt, args...) \
KGSL_LOG_WARN(_dev->dev, _dev->pwr_log, fmt, ##args)
#define KGSL_PWR_ERR(_dev, fmt, args...) \
KGSL_LOG_ERR(_dev->dev, _dev->pwr_log, fmt, ##args)
#define KGSL_PWR_CRIT(_dev, fmt, args...) \
KGSL_LOG_CRIT(_dev->dev, _dev->pwr_log, fmt, ##args)
/* Core error messages - these are for core KGSL functions that have
no device associated with them (such as memory) */
#define KGSL_CORE_ERR(fmt, args...) \
pr_err("kgsl: %s: " fmt, __func__, ##args)
#endif /* __KGSL_LOG_H */
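The lvl thresholds mirror the kernel loglevels (6 = info, 4 = warn, 3 = err, 2 = crit), so each per-category value such as drv_log acts as a verbosity dial: a category set to 3 emits ERR and CRIT but swallows WARN and INFO. A standalone sketch of the gating, with printf standing in for the dev_* helpers:
#include <stdio.h>
#define DEMO_LOG(lvl, threshold, fmt, args...) \
	do { \
		if ((lvl) >= (threshold)) \
			printf(fmt, ##args); \
	} while (0)
int main(void)
{
	int drv_log = 3; /* as if written via a log control file */
	DEMO_LOG(drv_log, 3, "err: printed (3 >= 3)\n");
	DEMO_LOG(drv_log, 4, "warn: suppressed (3 < 4)\n");
	return 0;
}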

View File

@ -1,720 +0,0 @@
/* Copyright (c) 2002,2007-2011, Code Aurora Forum. All rights reserved.
* Copyright (C) 2011 Sony Ericsson Mobile Communications AB.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/types.h>
#include <linux/device.h>
#include <linux/spinlock.h>
#include <linux/genalloc.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/iommu.h>
#include "kgsl.h"
#include "kgsl_mmu.h"
#include "kgsl_device.h"
#include "kgsl_sharedmem.h"
#define KGSL_MMU_ALIGN_SHIFT 13
#define KGSL_MMU_ALIGN_MASK (~((1 << KGSL_MMU_ALIGN_SHIFT) - 1))
static enum kgsl_mmutype kgsl_mmu_type;
static void pagetable_remove_sysfs_objects(struct kgsl_pagetable *pagetable);
static int kgsl_cleanup_pt(struct kgsl_pagetable *pt)
{
int i;
for (i = 0; i < KGSL_DEVICE_MAX; i++) {
struct kgsl_device *device = kgsl_driver.devp[i];
if (device)
device->ftbl->cleanup_pt(device, pt);
}
return 0;
}
static void kgsl_destroy_pagetable(struct kref *kref)
{
struct kgsl_pagetable *pagetable = container_of(kref,
struct kgsl_pagetable, refcount);
unsigned long flags;
spin_lock_irqsave(&kgsl_driver.ptlock, flags);
list_del(&pagetable->list);
spin_unlock_irqrestore(&kgsl_driver.ptlock, flags);
pagetable_remove_sysfs_objects(pagetable);
kgsl_cleanup_pt(pagetable);
if (pagetable->pool)
gen_pool_destroy(pagetable->pool);
pagetable->pt_ops->mmu_destroy_pagetable(pagetable->priv);
kfree(pagetable);
}
static inline void kgsl_put_pagetable(struct kgsl_pagetable *pagetable)
{
if (pagetable)
kref_put(&pagetable->refcount, kgsl_destroy_pagetable);
}
static struct kgsl_pagetable *
kgsl_get_pagetable(unsigned long name)
{
struct kgsl_pagetable *pt, *ret = NULL;
unsigned long flags;
spin_lock_irqsave(&kgsl_driver.ptlock, flags);
list_for_each_entry(pt, &kgsl_driver.pagetable_list, list) {
if (pt->name == name) {
ret = pt;
kref_get(&ret->refcount);
break;
}
}
spin_unlock_irqrestore(&kgsl_driver.ptlock, flags);
return ret;
}
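/*
 * The kref_get() above happens under ptlock, so a pagetable returned
 * by kgsl_get_pagetable() cannot reach kgsl_destroy_pagetable() until
 * the caller drops its reference. A minimal sketch of the intended
 * lifetime (demo-only helper, not in the original file):
 */
static void __maybe_unused kgsl_pagetable_lookup_demo(unsigned long name)
{
	struct kgsl_pagetable *pt = kgsl_get_pagetable(name);
	if (pt) {
		/* pt is guaranteed live here */
		kgsl_put_pagetable(pt); /* may free it via kref_put */
	}
}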
static struct kgsl_pagetable *
_get_pt_from_kobj(struct kobject *kobj)
{
unsigned long ptname;
if (!kobj)
return NULL;
if (sscanf(kobj->name, "%lu", &ptname) != 1)
return NULL;
return kgsl_get_pagetable(ptname);
}
static ssize_t
sysfs_show_entries(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
struct kgsl_pagetable *pt;
int ret = 0;
pt = _get_pt_from_kobj(kobj);
if (pt)
ret += snprintf(buf, PAGE_SIZE, "%d\n", pt->stats.entries);
kgsl_put_pagetable(pt);
return ret;
}
static ssize_t
sysfs_show_mapped(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
struct kgsl_pagetable *pt;
int ret = 0;
pt = _get_pt_from_kobj(kobj);
if (pt)
ret += snprintf(buf, PAGE_SIZE, "%d\n", pt->stats.mapped);
kgsl_put_pagetable(pt);
return ret;
}
static ssize_t
sysfs_show_va_range(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
struct kgsl_pagetable *pt;
int ret = 0;
pt = _get_pt_from_kobj(kobj);
if (pt)
ret += snprintf(buf, PAGE_SIZE, "0x%x\n",
CONFIG_MSM_KGSL_PAGE_TABLE_SIZE);
kgsl_put_pagetable(pt);
return ret;
}
static ssize_t
sysfs_show_max_mapped(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
struct kgsl_pagetable *pt;
int ret = 0;
pt = _get_pt_from_kobj(kobj);
if (pt)
ret += snprintf(buf, PAGE_SIZE, "%d\n", pt->stats.max_mapped);
kgsl_put_pagetable(pt);
return ret;
}
static ssize_t
sysfs_show_max_entries(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
struct kgsl_pagetable *pt;
int ret = 0;
pt = _get_pt_from_kobj(kobj);
if (pt)
ret += snprintf(buf, PAGE_SIZE, "%d\n", pt->stats.max_entries);
kgsl_put_pagetable(pt);
return ret;
}
static struct kobj_attribute attr_entries = {
.attr = { .name = "entries", .mode = 0444 },
.show = sysfs_show_entries,
.store = NULL,
};
static struct kobj_attribute attr_mapped = {
.attr = { .name = "mapped", .mode = 0444 },
.show = sysfs_show_mapped,
.store = NULL,
};
static struct kobj_attribute attr_va_range = {
.attr = { .name = "va_range", .mode = 0444 },
.show = sysfs_show_va_range,
.store = NULL,
};
static struct kobj_attribute attr_max_mapped = {
.attr = { .name = "max_mapped", .mode = 0444 },
.show = sysfs_show_max_mapped,
.store = NULL,
};
static struct kobj_attribute attr_max_entries = {
.attr = { .name = "max_entries", .mode = 0444 },
.show = sysfs_show_max_entries,
.store = NULL,
};
static struct attribute *pagetable_attrs[] = {
&attr_entries.attr,
&attr_mapped.attr,
&attr_va_range.attr,
&attr_max_mapped.attr,
&attr_max_entries.attr,
NULL,
};
static struct attribute_group pagetable_attr_group = {
.attrs = pagetable_attrs,
};
static void
pagetable_remove_sysfs_objects(struct kgsl_pagetable *pagetable)
{
if (pagetable->kobj)
sysfs_remove_group(pagetable->kobj,
&pagetable_attr_group);
kobject_put(pagetable->kobj);
}
static int
pagetable_add_sysfs_objects(struct kgsl_pagetable *pagetable)
{
char ptname[16];
int ret = -ENOMEM;
snprintf(ptname, sizeof(ptname), "%d", pagetable->name);
pagetable->kobj = kobject_create_and_add(ptname,
kgsl_driver.ptkobj);
if (pagetable->kobj == NULL)
goto err;
ret = sysfs_create_group(pagetable->kobj, &pagetable_attr_group);
err:
if (ret) {
if (pagetable->kobj)
kobject_put(pagetable->kobj);
pagetable->kobj = NULL;
}
return ret;
}
unsigned int kgsl_mmu_get_current_ptbase(struct kgsl_device *device)
{
struct kgsl_mmu *mmu = &device->mmu;
if (KGSL_MMU_TYPE_NONE == kgsl_mmu_type)
return 0;
else
return mmu->mmu_ops->mmu_get_current_ptbase(device);
}
EXPORT_SYMBOL(kgsl_mmu_get_current_ptbase);
int
kgsl_mmu_get_ptname_from_ptbase(unsigned int pt_base)
{
struct kgsl_pagetable *pt;
int ptid = -1;
spin_lock(&kgsl_driver.ptlock);
list_for_each_entry(pt, &kgsl_driver.pagetable_list, list) {
if (pt->pt_ops->mmu_pt_equal(pt, pt_base)) {
ptid = (int) pt->name;
break;
}
}
spin_unlock(&kgsl_driver.ptlock);
return ptid;
}
EXPORT_SYMBOL(kgsl_mmu_get_ptname_from_ptbase);
void kgsl_mmu_setstate(struct kgsl_device *device,
struct kgsl_pagetable *pagetable)
{
struct kgsl_mmu *mmu = &device->mmu;
if (KGSL_MMU_TYPE_NONE == kgsl_mmu_type)
return;
else
mmu->mmu_ops->mmu_setstate(device,
pagetable);
}
EXPORT_SYMBOL(kgsl_mmu_setstate);
int kgsl_mmu_init(struct kgsl_device *device)
{
struct kgsl_mmu *mmu = &device->mmu;
mmu->device = device;
if (KGSL_MMU_TYPE_NONE == kgsl_mmu_type ||
KGSL_MMU_TYPE_IOMMU == kgsl_mmu_type) {
dev_info(device->dev, "|%s| MMU type set for device is "
"NOMMU\n", __func__);
return 0;
} else if (KGSL_MMU_TYPE_GPU == kgsl_mmu_type)
mmu->mmu_ops = &gpummu_ops;
return mmu->mmu_ops->mmu_init(device);
}
EXPORT_SYMBOL(kgsl_mmu_init);
int kgsl_mmu_start(struct kgsl_device *device)
{
struct kgsl_mmu *mmu = &device->mmu;
if (kgsl_mmu_type == KGSL_MMU_TYPE_NONE) {
kgsl_regwrite(device, MH_MMU_CONFIG, 0);
return 0;
} else {
return mmu->mmu_ops->mmu_start(device);
}
}
EXPORT_SYMBOL(kgsl_mmu_start);
void kgsl_mh_intrcallback(struct kgsl_device *device)
{
unsigned int status = 0;
unsigned int reg;
kgsl_regread(device, MH_INTERRUPT_STATUS, &status);
kgsl_regread(device, MH_AXI_ERROR, &reg);
if (status & MH_INTERRUPT_MASK__AXI_READ_ERROR)
KGSL_MEM_CRIT(device, "axi read error interrupt: %08x\n", reg);
if (status & MH_INTERRUPT_MASK__AXI_WRITE_ERROR)
KGSL_MEM_CRIT(device, "axi write error interrupt: %08x\n", reg);
if (status & MH_INTERRUPT_MASK__MMU_PAGE_FAULT)
device->mmu.mmu_ops->mmu_pagefault(device);
status &= KGSL_MMU_INT_MASK;
kgsl_regwrite(device, MH_INTERRUPT_CLEAR, status);
}
EXPORT_SYMBOL(kgsl_mh_intrcallback);
static int kgsl_setup_pt(struct kgsl_pagetable *pt)
{
int i = 0;
int status = 0;
for (i = 0; i < KGSL_DEVICE_MAX; i++) {
struct kgsl_device *device = kgsl_driver.devp[i];
if (device) {
status = device->ftbl->setup_pt(device, pt);
if (status)
goto error_pt;
}
}
return status;
error_pt:
while (i >= 0) {
struct kgsl_device *device = kgsl_driver.devp[i];
if (device)
device->ftbl->cleanup_pt(device, pt);
i--;
}
return status;
}
static struct kgsl_pagetable *kgsl_mmu_createpagetableobject(
unsigned int name)
{
int status = 0;
struct kgsl_pagetable *pagetable = NULL;
unsigned long flags;
pagetable = kzalloc(sizeof(struct kgsl_pagetable), GFP_KERNEL);
if (pagetable == NULL) {
KGSL_CORE_ERR("kzalloc(%d) failed\n",
sizeof(struct kgsl_pagetable));
return NULL;
}
kref_init(&pagetable->refcount);
spin_lock_init(&pagetable->lock);
pagetable->name = name;
pagetable->max_entries = KGSL_PAGETABLE_ENTRIES(
CONFIG_MSM_KGSL_PAGE_TABLE_SIZE);
pagetable->pool = gen_pool_create(PAGE_SHIFT, -1);
if (pagetable->pool == NULL) {
KGSL_CORE_ERR("gen_pool_create(%d) failed\n", PAGE_SHIFT);
goto err_alloc;
}
if (gen_pool_add(pagetable->pool, KGSL_PAGETABLE_BASE,
CONFIG_MSM_KGSL_PAGE_TABLE_SIZE, -1)) {
KGSL_CORE_ERR("gen_pool_add failed\n");
goto err_pool;
}
if (KGSL_MMU_TYPE_GPU == kgsl_mmu_type)
pagetable->pt_ops = &gpummu_pt_ops;
pagetable->priv = pagetable->pt_ops->mmu_create_pagetable();
if (!pagetable->priv)
goto err_pool;
status = kgsl_setup_pt(pagetable);
if (status)
goto err_mmu_create;
spin_lock_irqsave(&kgsl_driver.ptlock, flags);
list_add(&pagetable->list, &kgsl_driver.pagetable_list);
spin_unlock_irqrestore(&kgsl_driver.ptlock, flags);
/* Create the sysfs entries */
pagetable_add_sysfs_objects(pagetable);
return pagetable;
err_mmu_create:
pagetable->pt_ops->mmu_destroy_pagetable(pagetable->priv);
err_pool:
gen_pool_destroy(pagetable->pool);
err_alloc:
kfree(pagetable);
return NULL;
}
struct kgsl_pagetable *kgsl_mmu_getpagetable(unsigned long name)
{
struct kgsl_pagetable *pt;
if (KGSL_MMU_TYPE_NONE == kgsl_mmu_type)
return (void *)(-1);
#ifndef CONFIG_KGSL_PER_PROCESS_PAGE_TABLE
name = KGSL_MMU_GLOBAL_PT;
#endif
pt = kgsl_get_pagetable(name);
if (pt == NULL)
pt = kgsl_mmu_createpagetableobject(name);
return pt;
}
void kgsl_mmu_putpagetable(struct kgsl_pagetable *pagetable)
{
kgsl_put_pagetable(pagetable);
}
EXPORT_SYMBOL(kgsl_mmu_putpagetable);
void kgsl_setstate(struct kgsl_device *device, uint32_t flags)
{
struct kgsl_mmu *mmu = &device->mmu;
if (KGSL_MMU_TYPE_NONE == kgsl_mmu_type)
return;
else if (device->ftbl->setstate)
device->ftbl->setstate(device, flags);
else if (mmu->mmu_ops->mmu_device_setstate)
mmu->mmu_ops->mmu_device_setstate(device, flags);
}
EXPORT_SYMBOL(kgsl_setstate);
void kgsl_mmu_device_setstate(struct kgsl_device *device, uint32_t flags)
{
struct kgsl_mmu *mmu = &device->mmu;
if (KGSL_MMU_TYPE_NONE == kgsl_mmu_type)
return;
else if (mmu->mmu_ops->mmu_device_setstate)
mmu->mmu_ops->mmu_device_setstate(device, flags);
}
EXPORT_SYMBOL(kgsl_mmu_device_setstate);
void kgsl_mh_start(struct kgsl_device *device)
{
struct kgsl_mh *mh = &device->mh;
/* force the mmu off for now */
kgsl_regwrite(device, MH_MMU_CONFIG, 0);
kgsl_idle(device, KGSL_TIMEOUT_DEFAULT);
/* define physical memory range accessible by the core */
kgsl_regwrite(device, MH_MMU_MPU_BASE, mh->mpu_base);
kgsl_regwrite(device, MH_MMU_MPU_END,
mh->mpu_base + mh->mpu_range);
kgsl_regwrite(device, MH_ARBITER_CONFIG, mh->mharb);
if (mh->mh_intf_cfg1 != 0)
kgsl_regwrite(device, MH_CLNT_INTF_CTRL_CONFIG1,
mh->mh_intf_cfg1);
if (mh->mh_intf_cfg2 != 0)
kgsl_regwrite(device, MH_CLNT_INTF_CTRL_CONFIG2,
mh->mh_intf_cfg2);
/*
* Interrupts are enabled on a per-device level when
* kgsl_pwrctrl_irq() is called
*/
}
int
kgsl_mmu_map(struct kgsl_pagetable *pagetable,
struct kgsl_memdesc *memdesc,
unsigned int protflags)
{
int ret;
if (kgsl_mmu_type == KGSL_MMU_TYPE_NONE) {
memdesc->gpuaddr = memdesc->physaddr;
return 0;
}
memdesc->gpuaddr = gen_pool_alloc_aligned(pagetable->pool,
memdesc->size, KGSL_MMU_ALIGN_SHIFT);
if (memdesc->gpuaddr == 0) {
KGSL_CORE_ERR("gen_pool_alloc(%d) failed\n", memdesc->size);
KGSL_CORE_ERR(" [%d] allocated=%d, entries=%d\n",
pagetable->name, pagetable->stats.mapped,
pagetable->stats.entries);
return -ENOMEM;
}
spin_lock(&pagetable->lock);
ret = pagetable->pt_ops->mmu_map(pagetable->priv, memdesc, protflags);
if (ret)
goto err_free_gpuaddr;
/* Keep track of the statistics for the sysfs files */
KGSL_STATS_ADD(1, pagetable->stats.entries,
pagetable->stats.max_entries);
KGSL_STATS_ADD(memdesc->size, pagetable->stats.mapped,
pagetable->stats.max_mapped);
spin_unlock(&pagetable->lock);
return 0;
err_free_gpuaddr:
spin_unlock(&pagetable->lock);
gen_pool_free(pagetable->pool, memdesc->gpuaddr, memdesc->size);
memdesc->gpuaddr = 0;
return ret;
}
EXPORT_SYMBOL(kgsl_mmu_map);
int
kgsl_mmu_unmap(struct kgsl_pagetable *pagetable,
struct kgsl_memdesc *memdesc)
{
if (memdesc->size == 0 || memdesc->gpuaddr == 0)
return 0;
if (kgsl_mmu_type == KGSL_MMU_TYPE_NONE) {
memdesc->gpuaddr = 0;
return 0;
}
spin_lock(&pagetable->lock);
pagetable->pt_ops->mmu_unmap(pagetable->priv, memdesc);
/* Remove the statistics */
pagetable->stats.entries--;
pagetable->stats.mapped -= memdesc->size;
spin_unlock(&pagetable->lock);
gen_pool_free(pagetable->pool,
memdesc->gpuaddr & KGSL_MMU_ALIGN_MASK,
memdesc->size);
return 0;
}
EXPORT_SYMBOL(kgsl_mmu_unmap);
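/*
 * Putting the two halves together: a hedged sketch of the
 * map/use/unmap flow as defined above (demo-only helper, not in the
 * original source); the protection flags come from kgsl_mmu.h.
 */
static int __maybe_unused kgsl_mmu_map_demo(struct kgsl_pagetable *pagetable,
	struct kgsl_memdesc *memdesc)
{
	int ret = kgsl_mmu_map(pagetable, memdesc,
		GSL_PT_PAGE_RV | GSL_PT_PAGE_WV);
	if (ret)
		return ret; /* memdesc->gpuaddr was reset to 0 */
	/* ... the GPU may now access memdesc->gpuaddr ... */
	return kgsl_mmu_unmap(pagetable, memdesc);
}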
int kgsl_mmu_map_global(struct kgsl_pagetable *pagetable,
struct kgsl_memdesc *memdesc, unsigned int protflags)
{
int result = -EINVAL;
unsigned int gpuaddr = 0;
if (memdesc == NULL) {
KGSL_CORE_ERR("invalid memdesc\n");
goto error;
}
/* Not all global mappings are needed for all MMU types */
if (!memdesc->size)
return 0;
gpuaddr = memdesc->gpuaddr;
result = kgsl_mmu_map(pagetable, memdesc, protflags);
if (result)
goto error;
/*global mappings must have the same gpu address in all pagetables*/
if (gpuaddr && gpuaddr != memdesc->gpuaddr) {
KGSL_CORE_ERR("pt %p addr mismatch phys 0x%08x"
"gpu 0x%0x 0x%08x", pagetable, memdesc->physaddr,
gpuaddr, memdesc->gpuaddr);
goto error_unmap;
}
return result;
error_unmap:
kgsl_mmu_unmap(pagetable, memdesc);
error:
return result;
}
EXPORT_SYMBOL(kgsl_mmu_map_global);
int kgsl_mmu_stop(struct kgsl_device *device)
{
struct kgsl_mmu *mmu = &device->mmu;
if (kgsl_mmu_type == KGSL_MMU_TYPE_NONE)
return 0;
else
return mmu->mmu_ops->mmu_stop(device);
}
EXPORT_SYMBOL(kgsl_mmu_stop);
int kgsl_mmu_close(struct kgsl_device *device)
{
struct kgsl_mmu *mmu = &device->mmu;
if (kgsl_mmu_type == KGSL_MMU_TYPE_NONE)
return 0;
else
return mmu->mmu_ops->mmu_close(device);
}
EXPORT_SYMBOL(kgsl_mmu_close);
int kgsl_mmu_pt_get_flags(struct kgsl_pagetable *pt,
enum kgsl_deviceid id)
{
if (KGSL_MMU_TYPE_GPU == kgsl_mmu_type)
return pt->pt_ops->mmu_pt_get_flags(pt, id);
else
return 0;
}
EXPORT_SYMBOL(kgsl_mmu_pt_get_flags);
void kgsl_mmu_ptpool_destroy(void *ptpool)
{
if (KGSL_MMU_TYPE_GPU == kgsl_mmu_type)
kgsl_gpummu_ptpool_destroy(ptpool);
}
EXPORT_SYMBOL(kgsl_mmu_ptpool_destroy);
void *kgsl_mmu_ptpool_init(int ptsize, int entries)
{
if (KGSL_MMU_TYPE_GPU == kgsl_mmu_type)
return kgsl_gpummu_ptpool_init(ptsize, entries);
else
return (void *)(-1);
}
EXPORT_SYMBOL(kgsl_mmu_ptpool_init);
int kgsl_mmu_enabled(void)
{
if (KGSL_MMU_TYPE_NONE != kgsl_mmu_type)
return 1;
else
return 0;
}
EXPORT_SYMBOL(kgsl_mmu_enabled);
int kgsl_mmu_pt_equal(struct kgsl_pagetable *pt,
unsigned int pt_base)
{
if (KGSL_MMU_TYPE_NONE == kgsl_mmu_type)
return true;
else
return pt->pt_ops->mmu_pt_equal(pt, pt_base);
}
EXPORT_SYMBOL(kgsl_mmu_pt_equal);
enum kgsl_mmutype kgsl_mmu_get_mmutype(void)
{
return kgsl_mmu_type;
}
EXPORT_SYMBOL(kgsl_mmu_get_mmutype);
void kgsl_mmu_set_mmutype(char *mmutype)
{
kgsl_mmu_type = KGSL_MMU_TYPE_NONE;
#ifdef CONFIG_MSM_KGSL_GPUMMU
kgsl_mmu_type = KGSL_MMU_TYPE_GPU;
#elif defined(CONFIG_MSM_KGSL_IOMMU)
#endif
if (mmutype && !strncmp(mmutype, "gpummu", 6))
kgsl_mmu_type = KGSL_MMU_TYPE_GPU;
if (mmutype && !strncmp(mmutype, "nommu", 5))
kgsl_mmu_type = KGSL_MMU_TYPE_NONE;
}
EXPORT_SYMBOL(kgsl_mmu_set_mmutype);

View File

@ -1,193 +0,0 @@
/* Copyright (c) 2002,2007-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __KGSL_MMU_H
#define __KGSL_MMU_H
#define KGSL_MMU_ALIGN_SHIFT 13
#define KGSL_MMU_ALIGN_MASK (~((1 << KGSL_MMU_ALIGN_SHIFT) - 1))
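/*
 * With an alignment shift of 13 every allocation is 8 KB aligned and
 * the mask evaluates to 0xffffe000; kgsl_mmu_unmap relies on it to
 * strip any sub-8K offset off memdesc->gpuaddr before gen_pool_free().
 * A quick standalone check of the arithmetic:
 *
 *	unsigned int mask = ~((1u << 13) - 1);	-> 0xffffe000
 *	unsigned int addr = 0x40001234;
 *	addr & mask == 0x40000000		-> offset 0x1234 stripped
 */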
/* Identifier for the global page table */
/* Per process page tables will probably pass in the thread group
as an identifier */
#define KGSL_MMU_GLOBAL_PT 0
struct kgsl_device;
#define GSL_PT_SUPER_PTE 8
#define GSL_PT_PAGE_WV 0x00000001
#define GSL_PT_PAGE_RV 0x00000002
#define GSL_PT_PAGE_DIRTY 0x00000004
/* MMU registers - the register locations for all cores are the
same. The method for getting to those locations differs between
2D and 3D, but the 2D and 3D register functions do that magic
for us */
#define MH_MMU_CONFIG 0x0040
#define MH_MMU_VA_RANGE 0x0041
#define MH_MMU_PT_BASE 0x0042
#define MH_MMU_PAGE_FAULT 0x0043
#define MH_MMU_TRAN_ERROR 0x0044
#define MH_MMU_INVALIDATE 0x0045
#define MH_MMU_MPU_BASE 0x0046
#define MH_MMU_MPU_END 0x0047
#define MH_INTERRUPT_MASK 0x0A42
#define MH_INTERRUPT_STATUS 0x0A43
#define MH_INTERRUPT_CLEAR 0x0A44
#define MH_AXI_ERROR 0x0A45
#define MH_ARBITER_CONFIG 0x0A40
#define MH_DEBUG_CTRL 0x0A4E
#define MH_DEBUG_DATA 0x0A4F
#define MH_AXI_HALT_CONTROL 0x0A50
#define MH_CLNT_INTF_CTRL_CONFIG1 0x0A54
#define MH_CLNT_INTF_CTRL_CONFIG2 0x0A55
/* MH_MMU_CONFIG bit definitions */
#define MH_MMU_CONFIG__RB_W_CLNT_BEHAVIOR__SHIFT 0x00000004
#define MH_MMU_CONFIG__CP_W_CLNT_BEHAVIOR__SHIFT 0x00000006
#define MH_MMU_CONFIG__CP_R0_CLNT_BEHAVIOR__SHIFT 0x00000008
#define MH_MMU_CONFIG__CP_R1_CLNT_BEHAVIOR__SHIFT 0x0000000a
#define MH_MMU_CONFIG__CP_R2_CLNT_BEHAVIOR__SHIFT 0x0000000c
#define MH_MMU_CONFIG__CP_R3_CLNT_BEHAVIOR__SHIFT 0x0000000e
#define MH_MMU_CONFIG__CP_R4_CLNT_BEHAVIOR__SHIFT 0x00000010
#define MH_MMU_CONFIG__VGT_R0_CLNT_BEHAVIOR__SHIFT 0x00000012
#define MH_MMU_CONFIG__VGT_R1_CLNT_BEHAVIOR__SHIFT 0x00000014
#define MH_MMU_CONFIG__TC_R_CLNT_BEHAVIOR__SHIFT 0x00000016
#define MH_MMU_CONFIG__PA_W_CLNT_BEHAVIOR__SHIFT 0x00000018
/* MMU Flags */
#define KGSL_MMUFLAGS_TLBFLUSH 0x10000000
#define KGSL_MMUFLAGS_PTUPDATE 0x20000000
#define MH_INTERRUPT_MASK__AXI_READ_ERROR 0x00000001L
#define MH_INTERRUPT_MASK__AXI_WRITE_ERROR 0x00000002L
#define MH_INTERRUPT_MASK__MMU_PAGE_FAULT 0x00000004L
#ifdef CONFIG_MSM_KGSL_MMU
#define KGSL_MMU_INT_MASK \
(MH_INTERRUPT_MASK__AXI_READ_ERROR | \
MH_INTERRUPT_MASK__AXI_WRITE_ERROR | \
MH_INTERRUPT_MASK__MMU_PAGE_FAULT)
#else
#define KGSL_MMU_INT_MASK \
(MH_INTERRUPT_MASK__AXI_READ_ERROR | \
MH_INTERRUPT_MASK__AXI_WRITE_ERROR)
#endif
enum kgsl_mmutype {
KGSL_MMU_TYPE_GPU = 0,
KGSL_MMU_TYPE_IOMMU,
KGSL_MMU_TYPE_NONE
};
struct kgsl_pagetable {
spinlock_t lock;
struct kref refcount;
unsigned int max_entries;
struct gen_pool *pool;
struct list_head list;
unsigned int name;
struct kobject *kobj;
struct {
unsigned int entries;
unsigned int mapped;
unsigned int max_mapped;
unsigned int max_entries;
} stats;
const struct kgsl_mmu_pt_ops *pt_ops;
void *priv;
};
struct kgsl_mmu_ops {
int (*mmu_init) (struct kgsl_device *device);
int (*mmu_close) (struct kgsl_device *device);
int (*mmu_start) (struct kgsl_device *device);
int (*mmu_stop) (struct kgsl_device *device);
void (*mmu_setstate) (struct kgsl_device *device,
struct kgsl_pagetable *pagetable);
void (*mmu_device_setstate) (struct kgsl_device *device,
uint32_t flags);
void (*mmu_pagefault) (struct kgsl_device *device);
unsigned int (*mmu_get_current_ptbase)
(struct kgsl_device *device);
};
struct kgsl_mmu_pt_ops {
int (*mmu_map) (void *mmu_pt,
struct kgsl_memdesc *memdesc,
unsigned int protflags);
int (*mmu_unmap) (void *mmu_pt,
struct kgsl_memdesc *memdesc);
void *(*mmu_create_pagetable) (void);
void (*mmu_destroy_pagetable) (void *pt);
int (*mmu_pt_equal) (struct kgsl_pagetable *pt,
unsigned int pt_base);
unsigned int (*mmu_pt_get_flags) (struct kgsl_pagetable *pt,
enum kgsl_deviceid id);
};
struct kgsl_mmu {
unsigned int refcnt;
uint32_t flags;
struct kgsl_device *device;
unsigned int config;
struct kgsl_memdesc setstate_memory;
/* current page table object being used by device mmu */
struct kgsl_pagetable *defaultpagetable;
struct kgsl_pagetable *hwpagetable;
const struct kgsl_mmu_ops *mmu_ops;
void *priv;
};
#include "kgsl_gpummu.h"
extern struct kgsl_mmu_ops iommu_ops;
extern struct kgsl_mmu_pt_ops iommu_pt_ops;
struct kgsl_pagetable *kgsl_mmu_getpagetable(unsigned long name);
void kgsl_mmu_putpagetable(struct kgsl_pagetable *pagetable);
void kgsl_mh_start(struct kgsl_device *device);
void kgsl_mh_intrcallback(struct kgsl_device *device);
int kgsl_mmu_init(struct kgsl_device *device);
int kgsl_mmu_start(struct kgsl_device *device);
int kgsl_mmu_stop(struct kgsl_device *device);
int kgsl_mmu_close(struct kgsl_device *device);
int kgsl_mmu_map(struct kgsl_pagetable *pagetable,
struct kgsl_memdesc *memdesc,
unsigned int protflags);
int kgsl_mmu_map_global(struct kgsl_pagetable *pagetable,
struct kgsl_memdesc *memdesc, unsigned int protflags);
int kgsl_mmu_unmap(struct kgsl_pagetable *pagetable,
struct kgsl_memdesc *memdesc);
unsigned int kgsl_virtaddr_to_physaddr(void *virtaddr);
void kgsl_setstate(struct kgsl_device *device, uint32_t flags);
void kgsl_mmu_device_setstate(struct kgsl_device *device, uint32_t flags);
void kgsl_mmu_setstate(struct kgsl_device *device,
struct kgsl_pagetable *pt);
int kgsl_mmu_get_ptname_from_ptbase(unsigned int pt_base);
int kgsl_mmu_pt_get_flags(struct kgsl_pagetable *pt,
enum kgsl_deviceid id);
void kgsl_mmu_ptpool_destroy(void *ptpool);
void *kgsl_mmu_ptpool_init(int ptsize, int entries);
int kgsl_mmu_enabled(void);
int kgsl_mmu_pt_equal(struct kgsl_pagetable *pt,
unsigned int pt_base);
void kgsl_mmu_set_mmutype(char *mmutype);
unsigned int kgsl_mmu_get_current_ptbase(struct kgsl_device *device);
enum kgsl_mmutype kgsl_mmu_get_mmutype(void);
#endif /* __KGSL_MMU_H */

View File

@ -1,715 +0,0 @@
/* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
* Copyright (C) 2011 Sony Ericsson Mobile Communications AB.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/interrupt.h>
#include <linux/err.h>
#include <mach/msm_iomap.h>
#include "kgsl.h"
#include "kgsl_pwrscale.h"
#include "kgsl_device.h"
#define KGSL_PWRFLAGS_POWER_ON 0
#define KGSL_PWRFLAGS_CLK_ON 1
#define KGSL_PWRFLAGS_AXI_ON 2
#define KGSL_PWRFLAGS_IRQ_ON 3
#define SWITCH_OFF 200
#define GPU_SWFI_LATENCY 3
#define UPDATE_BUSY_VAL 1000000
#define UPDATE_BUSY 50
void kgsl_pwrctrl_pwrlevel_change(struct kgsl_device *device,
unsigned int new_level)
{
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
if (new_level < (pwr->num_pwrlevels - 1) &&
new_level >= pwr->thermal_pwrlevel &&
new_level != pwr->active_pwrlevel) {
pwr->active_pwrlevel = new_level;
if ((test_bit(KGSL_PWRFLAGS_CLK_ON, &pwr->power_flags)) ||
(device->state == KGSL_STATE_NAP))
clk_set_rate(pwr->grp_clks[0],
pwr->pwrlevels[pwr->active_pwrlevel].
gpu_freq);
if (test_bit(KGSL_PWRFLAGS_AXI_ON, &pwr->power_flags)) {
if (pwr->ebi1_clk)
clk_set_rate(pwr->ebi1_clk,
pwr->pwrlevels[pwr->active_pwrlevel].
bus_freq);
}
KGSL_PWR_WARN(device, "kgsl pwr level changed to %d\n",
pwr->active_pwrlevel);
}
}
EXPORT_SYMBOL(kgsl_pwrctrl_pwrlevel_change);
static int __gpuclk_store(int max, struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
int ret, i, delta = 5000000;
unsigned long val;
struct kgsl_device *device = kgsl_device_from_dev(dev);
struct kgsl_pwrctrl *pwr;
if (device == NULL)
return 0;
pwr = &device->pwrctrl;
ret = sscanf(buf, "%lu", &val);
if (ret != 1)
return count;
mutex_lock(&device->mutex);
for (i = 0; i < pwr->num_pwrlevels; i++) {
if (abs(pwr->pwrlevels[i].gpu_freq - val) < delta) {
if (max)
pwr->thermal_pwrlevel = i;
break;
}
}
if (i == pwr->num_pwrlevels)
goto done;
/*
* If the current or requested clock speed is greater than the
* thermal limit, bump down immediately.
*/
if (pwr->pwrlevels[pwr->active_pwrlevel].gpu_freq >
pwr->pwrlevels[pwr->thermal_pwrlevel].gpu_freq)
kgsl_pwrctrl_pwrlevel_change(device, pwr->thermal_pwrlevel);
else if (!max)
kgsl_pwrctrl_pwrlevel_change(device, i);
done:
mutex_unlock(&device->mutex);
return count;
}
static int kgsl_pwrctrl_max_gpuclk_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
return __gpuclk_store(1, dev, attr, buf, count);
}
static int kgsl_pwrctrl_max_gpuclk_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct kgsl_device *device = kgsl_device_from_dev(dev);
struct kgsl_pwrctrl *pwr;
if (device == NULL)
return 0;
pwr = &device->pwrctrl;
return snprintf(buf, PAGE_SIZE, "%d\n",
pwr->pwrlevels[pwr->thermal_pwrlevel].gpu_freq);
}
static int kgsl_pwrctrl_gpuclk_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
return __gpuclk_store(0, dev, attr, buf, count);
}
static int kgsl_pwrctrl_gpuclk_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct kgsl_device *device = kgsl_device_from_dev(dev);
struct kgsl_pwrctrl *pwr;
if (device == NULL)
return 0;
pwr = &device->pwrctrl;
return snprintf(buf, PAGE_SIZE, "%d\n",
pwr->pwrlevels[pwr->active_pwrlevel].gpu_freq);
}
static int kgsl_pwrctrl_pwrnap_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
char temp[20];
unsigned long val;
struct kgsl_device *device = kgsl_device_from_dev(dev);
struct kgsl_pwrctrl *pwr;
int rc;
if (device == NULL)
return 0;
pwr = &device->pwrctrl;
snprintf(temp, sizeof(temp), "%.*s",
(int)min(count, sizeof(temp) - 1), buf);
rc = strict_strtoul(temp, 0, &val);
if (rc)
return rc;
mutex_lock(&device->mutex);
if (val == 1)
pwr->nap_allowed = true;
else if (val == 0)
pwr->nap_allowed = false;
mutex_unlock(&device->mutex);
return count;
}
static int kgsl_pwrctrl_pwrnap_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct kgsl_device *device = kgsl_device_from_dev(dev);
if (device == NULL)
return 0;
return snprintf(buf, PAGE_SIZE, "%d\n", device->pwrctrl.nap_allowed);
}
static int kgsl_pwrctrl_idle_timer_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
char temp[20];
unsigned long val;
struct kgsl_device *device = kgsl_device_from_dev(dev);
struct kgsl_pwrctrl *pwr;
const long div = 1000/HZ;
static unsigned int org_interval_timeout = 1;
int rc;
if (device == NULL)
return 0;
pwr = &device->pwrctrl;
snprintf(temp, sizeof(temp), "%.*s",
(int)min(count, sizeof(temp) - 1), buf);
rc = strict_strtoul(temp, 0, &val);
if (rc)
return rc;
if (org_interval_timeout == 1)
org_interval_timeout = pwr->interval_timeout;
mutex_lock(&device->mutex);
/* Let the timeout be requested in ms, but convert to jiffies. */
val /= div;
if (val >= org_interval_timeout)
pwr->interval_timeout = val;
mutex_unlock(&device->mutex);
return count;
}
static int kgsl_pwrctrl_idle_timer_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct kgsl_device *device = kgsl_device_from_dev(dev);
if (device == NULL)
return 0;
return snprintf(buf, PAGE_SIZE, "%d\n",
device->pwrctrl.interval_timeout);
}
static int kgsl_pwrctrl_gpubusy_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
int ret;
struct kgsl_device *device = kgsl_device_from_dev(dev);
struct kgsl_busy *b = &device->pwrctrl.busy;
ret = snprintf(buf, 17, "%7d %7d\n",
b->on_time_old, b->time_old);
if (!test_bit(KGSL_PWRFLAGS_AXI_ON, &device->pwrctrl.power_flags)) {
b->on_time_old = 0;
b->time_old = 0;
}
return ret;
}
DEVICE_ATTR(gpuclk, 0644, kgsl_pwrctrl_gpuclk_show, kgsl_pwrctrl_gpuclk_store);
DEVICE_ATTR(max_gpuclk, 0644, kgsl_pwrctrl_max_gpuclk_show,
kgsl_pwrctrl_max_gpuclk_store);
DEVICE_ATTR(pwrnap, 0644, kgsl_pwrctrl_pwrnap_show, kgsl_pwrctrl_pwrnap_store);
DEVICE_ATTR(idle_timer, 0644, kgsl_pwrctrl_idle_timer_show,
kgsl_pwrctrl_idle_timer_store);
DEVICE_ATTR(gpubusy, 0644, kgsl_pwrctrl_gpubusy_show,
NULL);
static struct device_attribute *pwrctrl_attr_list[] = {
&dev_attr_gpuclk,
&dev_attr_max_gpuclk,
&dev_attr_pwrnap,
&dev_attr_idle_timer,
&dev_attr_gpubusy,
NULL
};
int kgsl_pwrctrl_init_sysfs(struct kgsl_device *device)
{
return kgsl_create_device_sysfs_files(device->dev, pwrctrl_attr_list);
}
void kgsl_pwrctrl_uninit_sysfs(struct kgsl_device *device)
{
kgsl_remove_device_sysfs_files(device->dev, pwrctrl_attr_list);
}
/* Track the amount of time the gpu is on vs the total system time. *
* Regularly update the percentage of busy time displayed by sysfs. */
static void kgsl_pwrctrl_busy_time(struct kgsl_device *device, bool on_time)
{
struct kgsl_busy *b = &device->pwrctrl.busy;
int elapsed;
if (b->start.tv_sec == 0)
do_gettimeofday(&(b->start));
do_gettimeofday(&(b->stop));
elapsed = (b->stop.tv_sec - b->start.tv_sec) * 1000000;
elapsed += b->stop.tv_usec - b->start.tv_usec;
b->time += elapsed;
if (on_time)
b->on_time += elapsed;
/* Update the output regularly and reset the counters. */
if ((b->time > UPDATE_BUSY_VAL) ||
!test_bit(KGSL_PWRFLAGS_AXI_ON, &device->pwrctrl.power_flags)) {
b->on_time_old = b->on_time;
b->time_old = b->time;
b->on_time = 0;
b->time = 0;
}
do_gettimeofday(&(b->start));
}
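/*
 * The latched *_old pair is exactly what kgsl_pwrctrl_gpubusy_show
 * prints: a duty cycle over roughly the last UPDATE_BUSY_VAL
 * microseconds. A hypothetical helper (not in the original file)
 * showing the intended interpretation:
 */
static int __maybe_unused kgsl_busy_percent_demo(const struct kgsl_busy *b)
{
	if (b->time_old == 0)
		return 0; /* nothing latched yet */
	return b->on_time_old * 100 / b->time_old;
}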
void kgsl_pwrctrl_clk(struct kgsl_device *device, int state)
{
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
int i = 0;
if (state == KGSL_PWRFLAGS_OFF) {
if (test_and_clear_bit(KGSL_PWRFLAGS_CLK_ON,
&pwr->power_flags)) {
KGSL_PWR_INFO(device,
"clocks off, device %d\n", device->id);
for (i = KGSL_MAX_CLKS - 1; i > 0; i--)
if (pwr->grp_clks[i])
clk_disable(pwr->grp_clks[i]);
if ((pwr->pwrlevels[0].gpu_freq > 0) &&
(device->requested_state != KGSL_STATE_NAP))
clk_set_rate(pwr->grp_clks[0],
pwr->pwrlevels[pwr->num_pwrlevels - 1].
gpu_freq);
kgsl_pwrctrl_busy_time(device, true);
}
} else if (state == KGSL_PWRFLAGS_ON) {
if (!test_and_set_bit(KGSL_PWRFLAGS_CLK_ON,
&pwr->power_flags)) {
KGSL_PWR_INFO(device,
"clocks on, device %d\n", device->id);
if ((pwr->pwrlevels[0].gpu_freq > 0) &&
(device->state != KGSL_STATE_NAP))
clk_set_rate(pwr->grp_clks[0],
pwr->pwrlevels[pwr->active_pwrlevel].
gpu_freq);
/* enable grp_clk last so that GPU interrupts can
come through */
for (i = KGSL_MAX_CLKS - 1; i > 0; i--)
if (pwr->grp_clks[i])
clk_enable(pwr->grp_clks[i]);
kgsl_pwrctrl_busy_time(device, false);
}
}
}
EXPORT_SYMBOL(kgsl_pwrctrl_clk);
void kgsl_pwrctrl_axi(struct kgsl_device *device, int state)
{
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
if (state == KGSL_PWRFLAGS_OFF) {
if (test_and_clear_bit(KGSL_PWRFLAGS_AXI_ON,
&pwr->power_flags)) {
KGSL_PWR_INFO(device,
"axi off, device %d\n", device->id);
if (pwr->ebi1_clk) {
clk_set_rate(pwr->ebi1_clk, 0);
clk_disable(pwr->ebi1_clk);
}
}
} else if (state == KGSL_PWRFLAGS_ON) {
if (!test_and_set_bit(KGSL_PWRFLAGS_AXI_ON,
&pwr->power_flags)) {
KGSL_PWR_INFO(device,
"axi on, device %d\n", device->id);
if (pwr->ebi1_clk) {
clk_enable(pwr->ebi1_clk);
clk_set_rate(pwr->ebi1_clk,
pwr->pwrlevels[pwr->active_pwrlevel].
bus_freq);
}
}
}
}
EXPORT_SYMBOL(kgsl_pwrctrl_axi);
void kgsl_pwrctrl_pwrrail(struct kgsl_device *device, int state)
{
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
if (state == KGSL_PWRFLAGS_OFF) {
if (test_and_clear_bit(KGSL_PWRFLAGS_POWER_ON,
&pwr->power_flags)) {
KGSL_PWR_INFO(device,
"power off, device %d\n", device->id);
if (pwr->gpu_reg)
regulator_disable(pwr->gpu_reg);
}
} else if (state == KGSL_PWRFLAGS_ON) {
if (!test_and_set_bit(KGSL_PWRFLAGS_POWER_ON,
&pwr->power_flags)) {
KGSL_PWR_INFO(device,
"power on, device %d\n", device->id);
if (pwr->gpu_reg)
regulator_enable(pwr->gpu_reg);
}
}
}
EXPORT_SYMBOL(kgsl_pwrctrl_pwrrail);
void kgsl_pwrctrl_irq(struct kgsl_device *device, int state)
{
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
if (state == KGSL_PWRFLAGS_ON) {
if (!test_and_set_bit(KGSL_PWRFLAGS_IRQ_ON,
&pwr->power_flags)) {
KGSL_PWR_INFO(device,
"irq on, device %d\n", device->id);
enable_irq(pwr->interrupt_num);
device->ftbl->irqctrl(device, 1);
}
} else if (state == KGSL_PWRFLAGS_OFF) {
if (test_and_clear_bit(KGSL_PWRFLAGS_IRQ_ON,
&pwr->power_flags)) {
KGSL_PWR_INFO(device,
"irq off, device %d\n", device->id);
device->ftbl->irqctrl(device, 0);
if (in_interrupt())
disable_irq_nosync(pwr->interrupt_num);
else
disable_irq(pwr->interrupt_num);
}
}
}
EXPORT_SYMBOL(kgsl_pwrctrl_irq);
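/*
 * All four toggles above share one idiom: test_and_set_bit() and
 * test_and_clear_bit() make the ON and OFF paths idempotent, so only
 * a real 0->1 or 1->0 transition touches clocks, rails or IRQs. A
 * userspace model of the same idea in C11 atomics (illustrative only;
 * the kernel helpers have different signatures):
 *
 *	static atomic_ulong flags;	// <stdatomic.h>
 *
 *	static int demo_turn_on(int bit)
 *	{
 *		unsigned long mask = 1UL << bit;
 *		// nonzero only on the 0 -> 1 transition
 *		return !(atomic_fetch_or(&flags, mask) & mask);
 *	}
 */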
int kgsl_pwrctrl_init(struct kgsl_device *device)
{
int i, result = 0;
struct clk *clk;
struct platform_device *pdev =
container_of(device->parentdev, struct platform_device, dev);
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
struct kgsl_device_platform_data *pdata_dev = pdev->dev.platform_data;
struct kgsl_device_pwr_data *pdata_pwr = &pdata_dev->pwr_data;
const char *clk_names[KGSL_MAX_CLKS] = {pwr->src_clk_name,
pdata_dev->clk.name.clk,
pdata_dev->clk.name.pclk,
pdata_dev->imem_clk_name.clk,
pdata_dev->imem_clk_name.pclk};
/* acquire clocks */
for (i = 1; i < KGSL_MAX_CLKS; i++) {
if (clk_names[i]) {
clk = clk_get(&pdev->dev, clk_names[i]);
if (IS_ERR(clk))
goto clk_err;
pwr->grp_clks[i] = clk;
}
}
/* Make sure we have a source clk for freq setting */
clk = clk_get(&pdev->dev, clk_names[0]);
pwr->grp_clks[0] = (IS_ERR(clk)) ? pwr->grp_clks[1] : clk;
/* put the AXI bus into asynchronous mode with the graphics cores */
if (pdata_pwr->set_grp_async != NULL)
pdata_pwr->set_grp_async();
if (pdata_pwr->num_levels > KGSL_MAX_PWRLEVELS) {
KGSL_PWR_ERR(device, "invalid power level count: %d\n",
pdata_pwr->num_levels);
result = -EINVAL;
goto done;
}
pwr->num_pwrlevels = pdata_pwr->num_levels;
pwr->active_pwrlevel = pdata_pwr->init_level;
for (i = 0; i < pdata_pwr->num_levels; i++) {
/* use the board-supplied frequency directly; the old
* clk_round_rate() call was dropped in this tree */
pwr->pwrlevels[i].gpu_freq =
(pdata_pwr->pwrlevel[i].gpu_freq > 0) ?
pdata_pwr->pwrlevel[i].gpu_freq : 0;
pwr->pwrlevels[i].bus_freq =
pdata_pwr->pwrlevel[i].bus_freq;
}
/* Do not set_rate for targets in sync with AXI */
if (pwr->pwrlevels[0].gpu_freq > 0)
clk_set_rate(pwr->grp_clks[0],
pwr->pwrlevels[pwr->num_pwrlevels - 1].gpu_freq);
pwr->gpu_reg = regulator_get(NULL, pwr->regulator_name);
if (IS_ERR(pwr->gpu_reg))
pwr->gpu_reg = NULL;
pwr->power_flags = 0;
pwr->nap_allowed = pdata_pwr->nap_allowed;
pwr->interval_timeout = pdata_pwr->idle_timeout;
pwr->ebi1_clk = clk_get(NULL, "ebi1_kgsl_clk");
if (IS_ERR(pwr->ebi1_clk))
pwr->ebi1_clk = NULL;
else
clk_set_rate(pwr->ebi1_clk,
pwr->pwrlevels[pwr->active_pwrlevel].
bus_freq);
/* acquire interrupt */
pwr->interrupt_num =
platform_get_irq_byname(pdev, pwr->irq_name);
if (pwr->interrupt_num <= 0) {
KGSL_PWR_ERR(device, "platform_get_irq_byname failed: %d\n",
pwr->interrupt_num);
result = -EINVAL;
goto done;
}
register_early_suspend(&device->display_off);
return result;
clk_err:
result = PTR_ERR(clk);
KGSL_PWR_ERR(device, "clk_get(%s) failed: %d\n",
clk_names[i], result);
done:
return result;
}
void kgsl_pwrctrl_close(struct kgsl_device *device)
{
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
int i;
KGSL_PWR_INFO(device, "close device %d\n", device->id);
unregister_early_suspend(&device->display_off);
if (pwr->interrupt_num > 0) {
if (pwr->have_irq) {
free_irq(pwr->interrupt_num, NULL);
pwr->have_irq = 0;
}
pwr->interrupt_num = 0;
}
clk_put(pwr->ebi1_clk);
pwr->pcl = 0;
if (pwr->gpu_reg) {
regulator_put(pwr->gpu_reg);
pwr->gpu_reg = NULL;
}
for (i = 1; i < KGSL_MAX_CLKS; i++)
if (pwr->grp_clks[i]) {
clk_put(pwr->grp_clks[i]);
pwr->grp_clks[i] = NULL;
}
pwr->grp_clks[0] = NULL;
pwr->power_flags = 0;
}
void kgsl_idle_check(struct work_struct *work)
{
struct kgsl_device *device = container_of(work, struct kgsl_device,
idle_check_ws);
mutex_lock(&device->mutex);
if (device->state & (KGSL_STATE_ACTIVE | KGSL_STATE_NAP)) {
if (device->requested_state != KGSL_STATE_SLEEP)
kgsl_pwrscale_idle(device);
if (kgsl_pwrctrl_sleep(device) != 0) {
mod_timer(&device->idle_timer,
jiffies +
device->pwrctrl.interval_timeout);
/* If the GPU has been too busy to sleep, make sure
* that is accurately reflected in the % busy numbers. */
device->pwrctrl.busy.no_nap_cnt++;
if (device->pwrctrl.busy.no_nap_cnt > UPDATE_BUSY) {
kgsl_pwrctrl_busy_time(device, true);
device->pwrctrl.busy.no_nap_cnt = 0;
}
}
} else if (device->state & (KGSL_STATE_HUNG |
KGSL_STATE_DUMP_AND_RECOVER)) {
device->requested_state = KGSL_STATE_NONE;
}
mutex_unlock(&device->mutex);
}
void kgsl_timer(unsigned long data)
{
struct kgsl_device *device = (struct kgsl_device *) data;
KGSL_PWR_INFO(device, "idle timer expired device %d\n", device->id);
if (device->requested_state != KGSL_STATE_SUSPEND) {
device->requested_state = KGSL_STATE_SLEEP;
/* Have work run in a non-interrupt context. */
queue_work(device->work_queue, &device->idle_check_ws);
}
}
void kgsl_pre_hwaccess(struct kgsl_device *device)
{
BUG_ON(!mutex_is_locked(&device->mutex));
if (device->state & (KGSL_STATE_SLEEP | KGSL_STATE_NAP))
kgsl_pwrctrl_wake(device);
}
EXPORT_SYMBOL(kgsl_pre_hwaccess);
void kgsl_check_suspended(struct kgsl_device *device)
{
if (device->requested_state == KGSL_STATE_SUSPEND ||
device->state == KGSL_STATE_SUSPEND) {
mutex_unlock(&device->mutex);
wait_for_completion(&device->hwaccess_gate);
mutex_lock(&device->mutex);
}
if (device->state == KGSL_STATE_DUMP_AND_RECOVER) {
mutex_unlock(&device->mutex);
wait_for_completion(&device->recovery_gate);
mutex_lock(&device->mutex);
}
}
/******************************************************************/
/* Caller must hold the device mutex. */
int kgsl_pwrctrl_sleep(struct kgsl_device *device)
{
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
KGSL_PWR_INFO(device, "sleep device %d\n", device->id);
/* Work through the legal state transitions */
if (device->requested_state == KGSL_STATE_NAP) {
if (device->ftbl->isidle(device))
goto nap;
} else if (device->requested_state == KGSL_STATE_SLEEP) {
if (device->state == KGSL_STATE_NAP ||
device->ftbl->isidle(device))
goto sleep;
}
device->requested_state = KGSL_STATE_NONE;
return -EBUSY;
sleep:
kgsl_pwrctrl_irq(device, KGSL_PWRFLAGS_OFF);
kgsl_pwrctrl_axi(device, KGSL_PWRFLAGS_OFF);
if (pwr->pwrlevels[0].gpu_freq > 0)
clk_set_rate(pwr->grp_clks[0],
pwr->pwrlevels[pwr->num_pwrlevels - 1].
gpu_freq);
kgsl_pwrctrl_busy_time(device, false);
pwr->busy.start.tv_sec = 0;
device->pwrctrl.time = 0;
goto clk_off;
nap:
kgsl_pwrctrl_irq(device, KGSL_PWRFLAGS_OFF);
clk_off:
kgsl_pwrctrl_clk(device, KGSL_PWRFLAGS_OFF);
device->state = device->requested_state;
device->requested_state = KGSL_STATE_NONE;
wake_unlock(&device->idle_wakelock);
KGSL_PWR_WARN(device, "state -> NAP/SLEEP(%d), device %d\n",
device->state, device->id);
return 0;
}
EXPORT_SYMBOL(kgsl_pwrctrl_sleep);
/******************************************************************/
/* Caller must hold the device mutex. */
void kgsl_pwrctrl_wake(struct kgsl_device *device)
{
if (device->state == KGSL_STATE_SUSPEND)
return;
if (device->state != KGSL_STATE_NAP) {
kgsl_pwrctrl_axi(device, KGSL_PWRFLAGS_ON);
}
/* Turn on the core clocks */
kgsl_pwrctrl_clk(device, KGSL_PWRFLAGS_ON);
/* Enable state before turning on irq */
device->state = KGSL_STATE_ACTIVE;
KGSL_PWR_WARN(device, "state -> ACTIVE, device %d\n", device->id);
kgsl_pwrctrl_irq(device, KGSL_PWRFLAGS_ON);
/* Re-enable HW access */
mod_timer(&device->idle_timer,
jiffies + device->pwrctrl.interval_timeout);
wake_lock(&device->idle_wakelock);
KGSL_PWR_INFO(device, "wake return for device %d\n", device->id);
}
EXPORT_SYMBOL(kgsl_pwrctrl_wake);
void kgsl_pwrctrl_enable(struct kgsl_device *device)
{
/* Order pwrrail/clk sequence based upon platform */
kgsl_pwrctrl_pwrrail(device, KGSL_PWRFLAGS_ON);
kgsl_pwrctrl_clk(device, KGSL_PWRFLAGS_ON);
kgsl_pwrctrl_axi(device, KGSL_PWRFLAGS_ON);
}
EXPORT_SYMBOL(kgsl_pwrctrl_enable);
void kgsl_pwrctrl_disable(struct kgsl_device *device)
{
/* Order pwrrail/clk sequence based upon platform */
kgsl_pwrctrl_axi(device, KGSL_PWRFLAGS_OFF);
kgsl_pwrctrl_clk(device, KGSL_PWRFLAGS_OFF);
kgsl_pwrctrl_pwrrail(device, KGSL_PWRFLAGS_OFF);
}
EXPORT_SYMBOL(kgsl_pwrctrl_disable);

View File

@ -1,87 +0,0 @@
/* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __KGSL_PWRCTRL_H
#define __KGSL_PWRCTRL_H
#include <mach/internal_power_rail.h>
/*****************************************************************************
** power flags
*****************************************************************************/
#define KGSL_PWRFLAGS_ON 1
#define KGSL_PWRFLAGS_OFF 0
#define KGSL_PWRLEVEL_TURBO 0
#define KGSL_PWRLEVEL_NOMINAL 1
#define KGSL_PWRLEVEL_LOW_OFFSET 2
#define KGSL_MAX_CLKS 5
struct platform_device;
struct kgsl_busy {
struct timeval start;
struct timeval stop;
int on_time;
int time;
int on_time_old;
int time_old;
unsigned int no_nap_cnt;
};
struct kgsl_pwrctrl {
int interrupt_num;
int have_irq;
unsigned int pwr_rail;
struct clk *ebi1_clk;
struct clk *grp_clks[KGSL_MAX_CLKS];
unsigned long power_flags;
struct kgsl_pwrlevel pwrlevels[KGSL_MAX_PWRLEVELS];
unsigned int active_pwrlevel;
int thermal_pwrlevel;
unsigned int num_pwrlevels;
unsigned int interval_timeout;
struct regulator *gpu_reg;
uint32_t pcl;
unsigned int nap_allowed;
const char *regulator_name;
const char *irq_name;
const char *src_clk_name;
s64 time;
struct kgsl_busy busy;
};
void kgsl_pwrctrl_clk(struct kgsl_device *device, int state);
void kgsl_pwrctrl_axi(struct kgsl_device *device, int state);
void kgsl_pwrctrl_pwrrail(struct kgsl_device *device, int state);
void kgsl_pwrctrl_irq(struct kgsl_device *device, int state);
int kgsl_pwrctrl_init(struct kgsl_device *device);
void kgsl_pwrctrl_close(struct kgsl_device *device);
void kgsl_timer(unsigned long data);
void kgsl_idle_check(struct work_struct *work);
void kgsl_pre_hwaccess(struct kgsl_device *device);
void kgsl_check_suspended(struct kgsl_device *device);
int kgsl_pwrctrl_sleep(struct kgsl_device *device);
void kgsl_pwrctrl_wake(struct kgsl_device *device);
void kgsl_pwrctrl_pwrlevel_change(struct kgsl_device *device,
unsigned int level);
int kgsl_pwrctrl_init_sysfs(struct kgsl_device *device);
void kgsl_pwrctrl_uninit_sysfs(struct kgsl_device *device);
void kgsl_pwrctrl_enable(struct kgsl_device *device);
void kgsl_pwrctrl_disable(struct kgsl_device *device);
static inline unsigned long kgsl_get_clkrate(struct clk *clk)
{
return (clk != NULL) ? clk_get_rate(clk) : 0;
}
#endif /* __KGSL_PWRCTRL_H */

View File

@ -1,338 +0,0 @@
/* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
* Copyright (C) 2011 Sony Ericsson Mobile Communications AB.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/kernel.h>
#include "kgsl.h"
#include "kgsl_pwrscale.h"
#include "kgsl_device.h"
struct kgsl_pwrscale_attribute {
struct attribute attr;
ssize_t (*show)(struct kgsl_device *device, char *buf);
ssize_t (*store)(struct kgsl_device *device, const char *buf,
size_t count);
};
#define to_pwrscale(k) container_of(k, struct kgsl_pwrscale, kobj)
#define pwrscale_to_device(p) container_of(p, struct kgsl_device, pwrscale)
#define to_device(k) container_of(k, struct kgsl_device, pwrscale_kobj)
#define to_pwrscale_attr(a) \
container_of(a, struct kgsl_pwrscale_attribute, attr)
#define to_policy_attr(a) \
container_of(a, struct kgsl_pwrscale_policy_attribute, attr)
#define PWRSCALE_ATTR(_name, _mode, _show, _store) \
struct kgsl_pwrscale_attribute pwrscale_attr_##_name = \
__ATTR(_name, _mode, _show, _store)
/* Master list of available policies */
static struct kgsl_pwrscale_policy *kgsl_pwrscale_policies[] = {
#ifdef CONFIG_MSM_SCM
&kgsl_pwrscale_policy_tz,
#endif
#ifdef CONFIG_MSM_SLEEP_STATS
&kgsl_pwrscale_policy_idlestats,
#endif
NULL
};
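/*
 * A policy is a named vtable; the callbacks this file invokes (init,
 * close, sleep, wake, idle, busy) define its shape. A hedged skeleton
 * of an entry for the list above (names hypothetical; the real struct
 * layout lives in kgsl_pwrscale.h, which is not in this excerpt):
 *
 *	static struct kgsl_pwrscale_policy kgsl_pwrscale_policy_demo = {
 *		.name = "demo",		// matched by pwrscale_policy_store
 *		.init = demo_init,	// kgsl_pwrscale_attach_policy
 *		.close = demo_close,	// _kgsl_pwrscale_detach_policy
 *		.idle = demo_idle,	// kgsl_pwrscale_idle
 *	};
 */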
static ssize_t pwrscale_policy_store(struct kgsl_device *device,
const char *buf, size_t count)
{
int i;
struct kgsl_pwrscale_policy *policy = NULL;
/* The special keyword none allows the user to detach all
policies */
if (!strncmp("none", buf, 4)) {
kgsl_pwrscale_detach_policy(device);
return count;
}
for (i = 0; kgsl_pwrscale_policies[i]; i++) {
if (!strncmp(kgsl_pwrscale_policies[i]->name, buf,
strnlen(kgsl_pwrscale_policies[i]->name,
PAGE_SIZE))) {
policy = kgsl_pwrscale_policies[i];
break;
}
}
if (policy)
if (kgsl_pwrscale_attach_policy(device, policy))
return -EIO;
return count;
}
static ssize_t pwrscale_policy_show(struct kgsl_device *device, char *buf)
{
int ret;
if (device->pwrscale.policy)
ret = snprintf(buf, PAGE_SIZE, "%s\n",
device->pwrscale.policy->name);
else
ret = snprintf(buf, PAGE_SIZE, "none\n");
return ret;
}
PWRSCALE_ATTR(policy, 0644, pwrscale_policy_show, pwrscale_policy_store);
static ssize_t pwrscale_avail_policies_show(struct kgsl_device *device,
char *buf)
{
int i, ret = 0;
for (i = 0; kgsl_pwrscale_policies[i]; i++) {
ret += snprintf(buf + ret, PAGE_SIZE - ret, "%s ",
kgsl_pwrscale_policies[i]->name);
}
ret += snprintf(buf + ret, PAGE_SIZE - ret, "none\n");
return ret;
}
PWRSCALE_ATTR(avail_policies, 0444, pwrscale_avail_policies_show, NULL);
static struct attribute *pwrscale_attrs[] = {
&pwrscale_attr_policy.attr,
&pwrscale_attr_avail_policies.attr,
NULL
};
static ssize_t policy_sysfs_show(struct kobject *kobj,
struct attribute *attr, char *buf)
{
struct kgsl_pwrscale *pwrscale = to_pwrscale(kobj);
struct kgsl_device *device = pwrscale_to_device(pwrscale);
struct kgsl_pwrscale_policy_attribute *pattr = to_policy_attr(attr);
ssize_t ret;
if (pattr->show)
ret = pattr->show(device, pwrscale, buf);
else
ret = -EIO;
return ret;
}
static ssize_t policy_sysfs_store(struct kobject *kobj,
struct attribute *attr,
const char *buf, size_t count)
{
struct kgsl_pwrscale *pwrscale = to_pwrscale(kobj);
struct kgsl_device *device = pwrscale_to_device(pwrscale);
struct kgsl_pwrscale_policy_attribute *pattr = to_policy_attr(attr);
ssize_t ret;
if (pattr->store)
ret = pattr->store(device, pwrscale, buf, count);
else
ret = -EIO;
return ret;
}
static void policy_sysfs_release(struct kobject *kobj)
{
}
static ssize_t pwrscale_sysfs_show(struct kobject *kobj,
struct attribute *attr, char *buf)
{
struct kgsl_device *device = to_device(kobj);
struct kgsl_pwrscale_attribute *pattr = to_pwrscale_attr(attr);
ssize_t ret;
if (pattr->show)
ret = pattr->show(device, buf);
else
ret = -EIO;
return ret;
}
static ssize_t pwrscale_sysfs_store(struct kobject *kobj,
struct attribute *attr,
const char *buf, size_t count)
{
struct kgsl_device *device = to_device(kobj);
struct kgsl_pwrscale_attribute *pattr = to_pwrscale_attr(attr);
ssize_t ret;
if (pattr->store)
ret = pattr->store(device, buf, count);
else
ret = -EIO;
return ret;
}
static void pwrscale_sysfs_release(struct kobject *kobj)
{
}
static struct sysfs_ops policy_sysfs_ops = {
.show = policy_sysfs_show,
.store = policy_sysfs_store
};
static struct sysfs_ops pwrscale_sysfs_ops = {
.show = pwrscale_sysfs_show,
.store = pwrscale_sysfs_store
};
static struct kobj_type ktype_pwrscale_policy = {
.sysfs_ops = &policy_sysfs_ops,
.default_attrs = NULL,
.release = policy_sysfs_release
};
static struct kobj_type ktype_pwrscale = {
.sysfs_ops = &pwrscale_sysfs_ops,
.default_attrs = pwrscale_attrs,
.release = pwrscale_sysfs_release
};
void kgsl_pwrscale_sleep(struct kgsl_device *device)
{
if (device->pwrscale.policy && device->pwrscale.policy->sleep)
device->pwrscale.policy->sleep(device, &device->pwrscale);
}
EXPORT_SYMBOL(kgsl_pwrscale_sleep);
void kgsl_pwrscale_wake(struct kgsl_device *device)
{
if (device->pwrscale.policy && device->pwrscale.policy->wake)
device->pwrscale.policy->wake(device, &device->pwrscale);
}
EXPORT_SYMBOL(kgsl_pwrscale_wake);
void kgsl_pwrscale_busy(struct kgsl_device *device)
{
if (device->pwrscale.policy && device->pwrscale.policy->busy)
if (!device->pwrscale.gpu_busy)
device->pwrscale.policy->busy(device,
&device->pwrscale);
device->pwrscale.gpu_busy = 1;
}
void kgsl_pwrscale_idle(struct kgsl_device *device)
{
if (device->pwrscale.policy && device->pwrscale.policy->idle)
device->pwrscale.policy->idle(device, &device->pwrscale);
device->pwrscale.gpu_busy = 0;
}
EXPORT_SYMBOL(kgsl_pwrscale_idle);
int kgsl_pwrscale_policy_add_files(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale,
struct attribute_group *attr_group)
{
int ret;
ret = kobject_add(&pwrscale->kobj, &device->pwrscale_kobj,
"%s", pwrscale->policy->name);
if (ret)
return ret;
ret = sysfs_create_group(&pwrscale->kobj, attr_group);
if (ret) {
kobject_del(&pwrscale->kobj);
kobject_put(&pwrscale->kobj);
}
return ret;
}
void kgsl_pwrscale_policy_remove_files(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale,
struct attribute_group *attr_group)
{
sysfs_remove_group(&pwrscale->kobj, attr_group);
kobject_del(&pwrscale->kobj);
kobject_put(&pwrscale->kobj);
}
static void _kgsl_pwrscale_detach_policy(struct kgsl_device *device)
{
if (device->pwrscale.policy != NULL) {
device->pwrscale.policy->close(device, &device->pwrscale);
kgsl_pwrctrl_pwrlevel_change(device,
device->pwrctrl.thermal_pwrlevel);
}
device->pwrscale.policy = NULL;
}
void kgsl_pwrscale_detach_policy(struct kgsl_device *device)
{
mutex_lock(&device->mutex);
_kgsl_pwrscale_detach_policy(device);
mutex_unlock(&device->mutex);
}
EXPORT_SYMBOL(kgsl_pwrscale_detach_policy);
int kgsl_pwrscale_attach_policy(struct kgsl_device *device,
struct kgsl_pwrscale_policy *policy)
{
int ret = 0;
mutex_lock(&device->mutex);
if (device->pwrscale.policy == policy)
goto done;
if (device->pwrscale.policy != NULL)
_kgsl_pwrscale_detach_policy(device);
device->pwrscale.policy = policy;
if (policy) {
ret = device->pwrscale.policy->init(device, &device->pwrscale);
if (ret)
device->pwrscale.policy = NULL;
}
done:
mutex_unlock(&device->mutex);
return ret;
}
EXPORT_SYMBOL(kgsl_pwrscale_attach_policy);
int kgsl_pwrscale_init(struct kgsl_device *device)
{
int ret;
ret = kobject_init_and_add(&device->pwrscale_kobj, &ktype_pwrscale,
&device->dev->kobj, "pwrscale");
if (ret)
return ret;
kobject_init(&device->pwrscale.kobj, &ktype_pwrscale_policy);
return ret;
}
EXPORT_SYMBOL(kgsl_pwrscale_init);
void kgsl_pwrscale_close(struct kgsl_device *device)
{
kobject_put(&device->pwrscale_kobj);
}
EXPORT_SYMBOL(kgsl_pwrscale_close);
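A usage illustration (an editor's sketch, not part of this diff): the policy attribute created above accepts a policy name, or the keyword "none" to detach. A minimal userspace caller, assuming the 3D device registers as kgsl-3d0 so the node appears at /sys/class/kgsl/kgsl-3d0/pwrscale/policy:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Path is an assumption for a typical kgsl-3d0 device node. */
	const char *node = "/sys/class/kgsl/kgsl-3d0/pwrscale/policy";
	int fd = open(node, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* "trustzone" selects kgsl_pwrscale_policy_tz; "none" detaches. */
	if (write(fd, "trustzone", strlen("trustzone")) < 0)
		perror("write");
	close(fd);
	return 0;
}

Reading the same node back returns the active policy name, and avail_policies lists everything compiled into kgsl_pwrscale_policies[].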

drivers/gpu/msm/kgsl_pwrscale.h

@@ -1,77 +0,0 @@
/* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __KGSL_PWRSCALE_H
#define __KGSL_PWRSCALE_H
struct kgsl_pwrscale;
struct kgsl_pwrscale_policy {
const char *name;
int (*init)(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale);
void (*close)(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale);
void (*idle)(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale);
void (*busy)(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale);
void (*sleep)(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale);
void (*wake)(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale);
};
struct kgsl_pwrscale {
struct kgsl_pwrscale_policy *policy;
struct kobject kobj;
void *priv;
int gpu_busy;
};
struct kgsl_pwrscale_policy_attribute {
struct attribute attr;
ssize_t (*show)(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale, char *buf);
ssize_t (*store)(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale, const char *buf,
size_t count);
};
#define PWRSCALE_POLICY_ATTR(_name, _mode, _show, _store) \
struct kgsl_pwrscale_policy_attribute policy_attr_##_name = \
__ATTR(_name, _mode, _show, _store)
extern struct kgsl_pwrscale_policy kgsl_pwrscale_policy_tz;
extern struct kgsl_pwrscale_policy kgsl_pwrscale_policy_idlestats;
int kgsl_pwrscale_init(struct kgsl_device *device);
void kgsl_pwrscale_close(struct kgsl_device *device);
int kgsl_pwrscale_attach_policy(struct kgsl_device *device,
struct kgsl_pwrscale_policy *policy);
void kgsl_pwrscale_detach_policy(struct kgsl_device *device);
void kgsl_pwrscale_idle(struct kgsl_device *device);
void kgsl_pwrscale_busy(struct kgsl_device *device);
void kgsl_pwrscale_sleep(struct kgsl_device *device);
void kgsl_pwrscale_wake(struct kgsl_device *device);
int kgsl_pwrscale_policy_add_files(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale,
struct attribute_group *attr_group);
void kgsl_pwrscale_policy_remove_files(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale,
struct attribute_group *attr_group);
#endif
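To make the callback contract above concrete, here is a minimal sketch of a hypothetical policy (the "noop" name is invented for illustration and does not exist in this tree). Only init and close are strictly required, since the attach and detach paths call them unconditionally; the remaining hooks are NULL-checked before use:

#include "kgsl.h"
#include "kgsl_device.h"
#include "kgsl_pwrscale.h"

static int noop_init(struct kgsl_device *device,
		struct kgsl_pwrscale *pwrscale)
{
	/* No private state needed for this sketch. */
	pwrscale->priv = NULL;
	return 0;
}

static void noop_close(struct kgsl_device *device,
		struct kgsl_pwrscale *pwrscale)
{
}

struct kgsl_pwrscale_policy kgsl_pwrscale_policy_noop = {
	.name = "noop",
	.init = noop_init,
	.close = noop_close,
	/* idle/busy/sleep/wake stay NULL; the kgsl_pwrscale_* hooks skip them */
};

To be selectable from sysfs the policy would also need an entry in the kgsl_pwrscale_policies[] master list in kgsl_pwrscale.c.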

drivers/gpu/msm/kgsl_pwrscale_idlestats.c

@@ -1,221 +0,0 @@
/* Copyright (c) 2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/slab.h>
#include <linux/timer.h>
#include <linux/idle_stats_device.h>
#include <linux/cpufreq.h>
#include <linux/notifier.h>
#include <linux/cpumask.h>
#include <linux/tick.h>
#include "kgsl.h"
#include "kgsl_pwrscale.h"
#include "kgsl_device.h"
#define MAX_CORES 4
struct _cpu_info {
spinlock_t lock;
struct notifier_block cpu_nb;
u64 start[MAX_CORES];
u64 end[MAX_CORES];
int curr_freq[MAX_CORES];
int max_freq[MAX_CORES];
};
struct idlestats_priv {
char name[32];
struct msm_idle_stats_device idledev;
struct kgsl_device *device;
struct msm_idle_pulse pulse;
struct _cpu_info cpu_info;
};
static int idlestats_cpufreq_notifier(
struct notifier_block *nb,
unsigned long val, void *data)
{
struct _cpu_info *cpu = container_of(nb,
struct _cpu_info, cpu_nb);
struct cpufreq_freqs *freq = data;
if (val != CPUFREQ_POSTCHANGE)
return 0;
spin_lock(&cpu->lock);
if (freq->cpu < num_possible_cpus())
cpu->curr_freq[freq->cpu] = freq->new / 1000;
spin_unlock(&cpu->lock);
return 0;
}
static void idlestats_get_sample(struct msm_idle_stats_device *idledev,
struct msm_idle_pulse *pulse)
{
struct kgsl_power_stats stats;
struct idlestats_priv *priv = container_of(idledev,
struct idlestats_priv, idledev);
struct kgsl_device *device = priv->device;
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
mutex_lock(&device->mutex);
/* If the GPU is asleep, don't wake it up - assume that we
are idle */
if (!(device->state & (KGSL_STATE_SLEEP | KGSL_STATE_NAP))) {
device->ftbl->power_stats(device, &stats);
pulse->busy_start_time = pwr->time - stats.busy_time;
pulse->busy_interval = stats.busy_time;
} else {
pulse->busy_start_time = pwr->time;
pulse->busy_interval = 0;
}
pulse->wait_interval = 0;
mutex_unlock(&device->mutex);
}
static void idlestats_busy(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale)
{
struct idlestats_priv *priv = pwrscale->priv;
int i, busy, nr_cpu = 1;
if (priv->pulse.busy_start_time != 0) {
priv->pulse.wait_interval = 0;
/* Calculate the total CPU busy time for this GPU pulse */
for (i = 0; i < num_possible_cpus(); i++) {
spin_lock(&priv->cpu_info.lock);
if (cpu_online(i)) {
priv->cpu_info.end[i] =
(u64)ktime_to_us(ktime_get()) -
get_cpu_idle_time_us(i, NULL);
busy = priv->cpu_info.end[i] -
priv->cpu_info.start[i];
/* Normalize the busy time by frequency */
busy = priv->cpu_info.curr_freq[i] *
(busy / priv->cpu_info.max_freq[i]);
priv->pulse.wait_interval += busy;
nr_cpu++;
}
spin_unlock(&priv->cpu_info.lock);
}
priv->pulse.wait_interval /= nr_cpu;
msm_idle_stats_idle_end(&priv->idledev, &priv->pulse);
}
priv->pulse.busy_start_time = ktime_to_us(ktime_get());
}
static void idlestats_idle(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale)
{
int i, nr_cpu;
struct kgsl_power_stats stats;
struct idlestats_priv *priv = pwrscale->priv;
/* This is called from within a mutex protected function, so
no additional locking required */
device->ftbl->power_stats(device, &stats);
/* If total_time is zero, then we don't have
any interesting statistics to store */
if (stats.total_time == 0) {
priv->pulse.busy_start_time = 0;
return;
}
priv->pulse.busy_interval = stats.busy_time;
nr_cpu = num_possible_cpus();
for (i = 0; i < nr_cpu; i++)
if (cpu_online(i))
priv->cpu_info.start[i] =
(u64)ktime_to_us(ktime_get()) -
get_cpu_idle_time_us(i, NULL);
msm_idle_stats_idle_start(&priv->idledev);
}
static void idlestats_sleep(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale)
{
struct idlestats_priv *priv = pwrscale->priv;
priv->idledev.stats->event |= MSM_IDLE_STATS_EVENT_IDLE_TIMER_EXPIRED;
}
static int idlestats_init(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale)
{
struct idlestats_priv *priv;
struct cpufreq_policy cpu_policy;
int ret, i;
priv = pwrscale->priv = kzalloc(sizeof(struct idlestats_priv),
GFP_KERNEL);
if (pwrscale->priv == NULL)
return -ENOMEM;
snprintf(priv->name, sizeof(priv->name), "idle_stats_%s",
device->name);
priv->device = device;
priv->idledev.name = (const char *) priv->name;
priv->idledev.get_sample = idlestats_get_sample;
spin_lock_init(&priv->cpu_info.lock);
priv->cpu_info.cpu_nb.notifier_call =
idlestats_cpufreq_notifier;
ret = cpufreq_register_notifier(&priv->cpu_info.cpu_nb,
CPUFREQ_TRANSITION_NOTIFIER);
if (ret)
goto err;
for (i = 0; i < num_possible_cpus(); i++) {
cpufreq_frequency_table_cpuinfo(&cpu_policy,
cpufreq_frequency_get_table(i));
priv->cpu_info.max_freq[i] = cpu_policy.max / 1000;
priv->cpu_info.curr_freq[i] = cpu_policy.max / 1000;
}
ret = msm_idle_stats_register_device(&priv->idledev);
err:
if (ret) {
kfree(pwrscale->priv);
pwrscale->priv = NULL;
}
return ret;
}
static void idlestats_close(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale)
{
struct idlestats_priv *priv = pwrscale->priv;
if (pwrscale->priv == NULL)
return;
cpufreq_unregister_notifier(&priv->cpu_info.cpu_nb,
CPUFREQ_TRANSITION_NOTIFIER);
msm_idle_stats_deregister_device(&priv->idledev);
kfree(pwrscale->priv);
pwrscale->priv = NULL;
}
struct kgsl_pwrscale_policy kgsl_pwrscale_policy_idlestats = {
.name = "idlestats",
.init = idlestats_init,
.idle = idlestats_idle,
.busy = idlestats_busy,
.sleep = idlestats_sleep,
.close = idlestats_close
};

drivers/gpu/msm/kgsl_pwrscale_trustzone.c

@@ -1,197 +0,0 @@
/* Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/io.h>
#include <mach/socinfo.h>
#include <mach/scm.h>
#include "kgsl.h"
#include "kgsl_pwrscale.h"
#include "kgsl_device.h"
#define TZ_GOVERNOR_PERFORMANCE 0
#define TZ_GOVERNOR_ONDEMAND 1
struct tz_priv {
int governor;
unsigned int no_switch_cnt;
unsigned int skip_cnt;
};
#define SWITCH_OFF 200
#define SWITCH_OFF_RESET_TH 40
#define SKIP_COUNTER 500
#define TZ_RESET_ID 0x3
#define TZ_UPDATE_ID 0x4
#ifdef CONFIG_MSM_SCM
/* Trap into the TrustZone, and call funcs there. */
static int __secure_tz_entry(u32 cmd, u32 val)
{
__iowmb();
return scm_call_atomic1(SCM_SVC_IO, cmd, val);
}
#else
static int __secure_tz_entry(u32 cmd, u32 val)
{
return 0;
}
#endif /* CONFIG_MSM_SCM */
static ssize_t tz_governor_show(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale,
char *buf)
{
struct tz_priv *priv = pwrscale->priv;
int ret;
if (priv->governor == TZ_GOVERNOR_ONDEMAND)
ret = snprintf(buf, 10, "ondemand\n");
else
ret = snprintf(buf, 13, "performance\n");
return ret;
}
static ssize_t tz_governor_store(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale,
const char *buf, size_t count)
{
char str[20];
struct tz_priv *priv = pwrscale->priv;
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
int ret;
ret = sscanf(buf, "%19s", str);
if (ret != 1)
return -EINVAL;
mutex_lock(&device->mutex);
if (!strncmp(str, "ondemand", 8))
priv->governor = TZ_GOVERNOR_ONDEMAND;
else if (!strncmp(str, "performance", 11))
priv->governor = TZ_GOVERNOR_PERFORMANCE;
if (priv->governor == TZ_GOVERNOR_PERFORMANCE)
kgsl_pwrctrl_pwrlevel_change(device, pwr->thermal_pwrlevel);
mutex_unlock(&device->mutex);
return count;
}
PWRSCALE_POLICY_ATTR(governor, 0644, tz_governor_show, tz_governor_store);
static struct attribute *tz_attrs[] = {
&policy_attr_governor.attr,
NULL
};
static struct attribute_group tz_attr_group = {
.attrs = tz_attrs,
};
static void tz_wake(struct kgsl_device *device, struct kgsl_pwrscale *pwrscale)
{
struct tz_priv *priv = pwrscale->priv;
if (device->state != KGSL_STATE_NAP &&
priv->governor == TZ_GOVERNOR_ONDEMAND)
kgsl_pwrctrl_pwrlevel_change(device,
device->pwrctrl.thermal_pwrlevel);
}
static void tz_idle(struct kgsl_device *device, struct kgsl_pwrscale *pwrscale)
{
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
struct tz_priv *priv = pwrscale->priv;
struct kgsl_power_stats stats;
int val;
/* In "performance" mode the clock speed always stays
the same */
if (priv->governor == TZ_GOVERNOR_PERFORMANCE)
return;
device->ftbl->power_stats(device, &stats);
if (stats.total_time == 0)
return;
/* If the GPU has stayed in turbo mode for a while, *
* stop writing out values. */
if (pwr->active_pwrlevel == 0) {
if (priv->no_switch_cnt > SWITCH_OFF) {
priv->skip_cnt++;
if (priv->skip_cnt > SKIP_COUNTER) {
priv->no_switch_cnt -= SWITCH_OFF_RESET_TH;
priv->skip_cnt = 0;
}
return;
}
priv->no_switch_cnt++;
} else {
priv->no_switch_cnt = 0;
}
val = __secure_tz_entry(TZ_UPDATE_ID,
stats.total_time - stats.busy_time);
if (val)
kgsl_pwrctrl_pwrlevel_change(device,
pwr->active_pwrlevel + val);
}
static void tz_sleep(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale)
{
struct tz_priv *priv = pwrscale->priv;
__secure_tz_entry(TZ_RESET_ID, 0);
priv->no_switch_cnt = 0;
}
static int tz_init(struct kgsl_device *device, struct kgsl_pwrscale *pwrscale)
{
struct tz_priv *priv;
/* Trustzone is only valid for some SOCs */
if (!(cpu_is_msm8x60() || cpu_is_msm8960() || cpu_is_msm8930()))
return -EINVAL;
priv = pwrscale->priv = kzalloc(sizeof(struct tz_priv), GFP_KERNEL);
if (pwrscale->priv == NULL)
return -ENOMEM;
priv->governor = TZ_GOVERNOR_ONDEMAND;
kgsl_pwrscale_policy_add_files(device, pwrscale, &tz_attr_group);
return 0;
}
static void tz_close(struct kgsl_device *device, struct kgsl_pwrscale *pwrscale)
{
kgsl_pwrscale_policy_remove_files(device, pwrscale, &tz_attr_group);
kfree(pwrscale->priv);
pwrscale->priv = NULL;
}
struct kgsl_pwrscale_policy kgsl_pwrscale_policy_tz = {
.name = "trustzone",
.init = tz_init,
.idle = tz_idle,
.sleep = tz_sleep,
.wake = tz_wake,
.close = tz_close
};
EXPORT_SYMBOL(kgsl_pwrscale_policy_tz);
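A note on the hysteresis in tz_idle above: at the top power level the policy consults the TrustZone governor for the first SWITCH_OFF (200) idle ticks, then goes quiet for SKIP_COUNTER (500) ticks at a time, re-opening a window of SWITCH_OFF_RESET_TH (40) consultations after each quiet stretch. A standalone replay of the counter logic (an editor's sketch in plain userspace C, reusing the same constants):

#include <stdio.h>

#define SWITCH_OFF 200
#define SWITCH_OFF_RESET_TH 40
#define SKIP_COUNTER 500

int main(void)
{
	unsigned int no_switch_cnt = 0, skip_cnt = 0, consulted = 0;
	int tick;

	/* Simulate 2000 consecutive idle ticks at power level 0. */
	for (tick = 0; tick < 2000; tick++) {
		if (no_switch_cnt > SWITCH_OFF) {
			if (++skip_cnt > SKIP_COUNTER) {
				no_switch_cnt -= SWITCH_OFF_RESET_TH;
				skip_cnt = 0;
			}
			continue; /* TZ not consulted this tick */
		}
		no_switch_cnt++;
		consulted++; /* __secure_tz_entry(TZ_UPDATE_ID, ...) would run */
	}
	printf("TZ consulted on %u of 2000 idle ticks\n", consulted);
	return 0;
}

In steady state this works out to roughly 40 consultations per 541 idle ticks.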

drivers/gpu/msm/kgsl_sharedmem.c

@@ -1,611 +0,0 @@
/* Copyright (c) 2002,2007-2011, Code Aurora Forum. All rights reserved.
* Copyright (C) 2011 Sony Ericsson Mobile Communications AB.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/vmalloc.h>
#include <linux/memory_alloc.h>
#include <asm/cacheflush.h>
#include "kgsl.h"
#include "kgsl_sharedmem.h"
#include "kgsl_cffdump.h"
#include "kgsl_device.h"
#include "adreno_ringbuffer.h"
static struct kgsl_process_private *
_get_priv_from_kobj(struct kobject *kobj)
{
struct kgsl_process_private *private;
unsigned long name;
if (!kobj)
return NULL;
if (sscanf(kobj->name, "%lu", &name) != 1)
return NULL;
list_for_each_entry(private, &kgsl_driver.process_list, list) {
if (private->pid == name)
return private;
}
return NULL;
}
/* sharedmem / memory sysfs files */
static ssize_t
process_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
struct kgsl_process_private *priv;
unsigned int val = 0;
mutex_lock(&kgsl_driver.process_mutex);
priv = _get_priv_from_kobj(kobj);
if (priv == NULL) {
mutex_unlock(&kgsl_driver.process_mutex);
return 0;
}
if (!strncmp(attr->attr.name, "user", 4))
val = priv->stats.user;
if (!strncmp(attr->attr.name, "user_max", 8))
val = priv->stats.user_max;
if (!strncmp(attr->attr.name, "mapped", 6))
val = priv->stats.mapped;
if (!strncmp(attr->attr.name, "mapped_max", 10))
val = priv->stats.mapped_max;
if (!strncmp(attr->attr.name, "flushes", 7))
val = priv->stats.flushes;
mutex_unlock(&kgsl_driver.process_mutex);
return snprintf(buf, PAGE_SIZE, "%u\n", val);
}
#define KGSL_MEMSTAT_ATTR(_name, _show) \
static struct kobj_attribute attr_##_name = \
__ATTR(_name, 0444, _show, NULL)
KGSL_MEMSTAT_ATTR(user, process_show);
KGSL_MEMSTAT_ATTR(user_max, process_show);
KGSL_MEMSTAT_ATTR(mapped, process_show);
KGSL_MEMSTAT_ATTR(mapped_max, process_show);
KGSL_MEMSTAT_ATTR(flushes, process_show);
static struct attribute *process_attrs[] = {
&attr_user.attr,
&attr_user_max.attr,
&attr_mapped.attr,
&attr_mapped_max.attr,
&attr_flushes.attr,
NULL
};
static struct attribute_group process_attr_group = {
.attrs = process_attrs,
};
void
kgsl_process_uninit_sysfs(struct kgsl_process_private *private)
{
/* Remove the sysfs entry */
if (private->kobj) {
sysfs_remove_group(private->kobj, &process_attr_group);
kobject_put(private->kobj);
}
}
void
kgsl_process_init_sysfs(struct kgsl_process_private *private)
{
unsigned char name[16];
/* Add a entry to the sysfs device */
snprintf(name, sizeof(name), "%d", private->pid);
private->kobj = kobject_create_and_add(name, kgsl_driver.prockobj);
/* sysfs failure isn't fatal, just annoying */
if (private->kobj != NULL) {
if (sysfs_create_group(private->kobj, &process_attr_group)) {
kobject_put(private->kobj);
private->kobj = NULL;
}
}
}
static int kgsl_drv_memstat_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
unsigned int val = 0;
if (!strncmp(attr->attr.name, "vmalloc", 7))
val = kgsl_driver.stats.vmalloc;
else if (!strncmp(attr->attr.name, "vmalloc_max", 11))
val = kgsl_driver.stats.vmalloc_max;
else if (!strncmp(attr->attr.name, "coherent", 8))
val = kgsl_driver.stats.coherent;
else if (!strncmp(attr->attr.name, "coherent_max", 12))
val = kgsl_driver.stats.coherent_max;
else if (!strncmp(attr->attr.name, "mapped", 6))
val = kgsl_driver.stats.mapped;
else if (!strncmp(attr->attr.name, "mapped_max", 10))
val = kgsl_driver.stats.mapped_max;
return snprintf(buf, PAGE_SIZE, "%u\n", val);
}
static int kgsl_drv_histogram_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
int len = 0;
int i;
for (i = 0; i < 16; i++)
len += snprintf(buf + len, PAGE_SIZE - len, "%d ",
kgsl_driver.stats.histogram[i]);
len += snprintf(buf + len, PAGE_SIZE - len, "\n");
return len;
}
DEVICE_ATTR(vmalloc, 0444, kgsl_drv_memstat_show, NULL);
DEVICE_ATTR(vmalloc_max, 0444, kgsl_drv_memstat_show, NULL);
DEVICE_ATTR(coherent, 0444, kgsl_drv_memstat_show, NULL);
DEVICE_ATTR(coherent_max, 0444, kgsl_drv_memstat_show, NULL);
DEVICE_ATTR(mapped, 0444, kgsl_drv_memstat_show, NULL);
DEVICE_ATTR(mapped_max, 0444, kgsl_drv_memstat_show, NULL);
DEVICE_ATTR(histogram, 0444, kgsl_drv_histogram_show, NULL);
static struct device_attribute *drv_attr_list[] = {
&dev_attr_vmalloc,
&dev_attr_vmalloc_max,
&dev_attr_coherent,
&dev_attr_coherent_max,
&dev_attr_mapped,
&dev_attr_mapped_max,
&dev_attr_histogram,
NULL
};
void
kgsl_sharedmem_uninit_sysfs(void)
{
kgsl_remove_device_sysfs_files(&kgsl_driver.virtdev, drv_attr_list);
}
int
kgsl_sharedmem_init_sysfs(void)
{
return kgsl_create_device_sysfs_files(&kgsl_driver.virtdev,
drv_attr_list);
}
#ifdef CONFIG_OUTER_CACHE
static void _outer_cache_range_op(int op, unsigned long addr, size_t size)
{
switch (op) {
case KGSL_CACHE_OP_FLUSH:
outer_flush_range(addr, addr + size);
break;
case KGSL_CACHE_OP_CLEAN:
outer_clean_range(addr, addr + size);
break;
case KGSL_CACHE_OP_INV:
outer_inv_range(addr, addr + size);
break;
}
}
static void outer_cache_range_op_sg(struct scatterlist *sg, int sglen, int op)
{
struct scatterlist *s;
int i;
for_each_sg(sg, s, sglen, i) {
unsigned int paddr = sg_phys(s);
_outer_cache_range_op(op, paddr, s->length);
}
}
#else
static void outer_cache_range_op_sg(struct scatterlist *sg, int sglen, int op)
{
}
#endif
static int kgsl_vmalloc_vmfault(struct kgsl_memdesc *memdesc,
struct vm_area_struct *vma,
struct vm_fault *vmf)
{
unsigned long offset, pg;
struct page *page;
offset = (unsigned long) vmf->virtual_address - vma->vm_start;
pg = (unsigned long) memdesc->hostptr + offset;
page = vmalloc_to_page((void *) pg);
if (page == NULL)
return VM_FAULT_SIGBUS;
get_page(page);
vmf->page = page;
return 0;
}
static int kgsl_vmalloc_vmflags(struct kgsl_memdesc *memdesc)
{
return VM_RESERVED | VM_DONTEXPAND;
}
static void kgsl_vmalloc_free(struct kgsl_memdesc *memdesc)
{
kgsl_driver.stats.vmalloc -= memdesc->size;
vfree(memdesc->hostptr);
}
static int kgsl_contiguous_vmflags(struct kgsl_memdesc *memdesc)
{
return VM_RESERVED | VM_IO | VM_PFNMAP | VM_DONTEXPAND;
}
static int kgsl_contiguous_vmfault(struct kgsl_memdesc *memdesc,
struct vm_area_struct *vma,
struct vm_fault *vmf)
{
unsigned long offset, pfn;
int ret;
offset = ((unsigned long) vmf->virtual_address - vma->vm_start) >>
PAGE_SHIFT;
pfn = (memdesc->physaddr >> PAGE_SHIFT) + offset;
ret = vm_insert_pfn(vma, (unsigned long) vmf->virtual_address, pfn);
if (ret == -ENOMEM || ret == -EAGAIN)
return VM_FAULT_OOM;
else if (ret == -EFAULT)
return VM_FAULT_SIGBUS;
return VM_FAULT_NOPAGE;
}
static void kgsl_ebimem_free(struct kgsl_memdesc *memdesc)
{
kgsl_driver.stats.coherent -= memdesc->size;
if (memdesc->hostptr)
iounmap(memdesc->hostptr);
free_contiguous_memory_by_paddr(memdesc->physaddr);
}
static void kgsl_coherent_free(struct kgsl_memdesc *memdesc)
{
kgsl_driver.stats.coherent -= memdesc->size;
dma_free_coherent(NULL, memdesc->size,
memdesc->hostptr, memdesc->physaddr);
}
/* Global - also used by kgsl_drm.c */
struct kgsl_memdesc_ops kgsl_vmalloc_ops = {
.free = kgsl_vmalloc_free,
.vmflags = kgsl_vmalloc_vmflags,
.vmfault = kgsl_vmalloc_vmfault,
};
EXPORT_SYMBOL(kgsl_vmalloc_ops);
static struct kgsl_memdesc_ops kgsl_ebimem_ops = {
.free = kgsl_ebimem_free,
.vmflags = kgsl_contiguous_vmflags,
.vmfault = kgsl_contiguous_vmfault,
};
static struct kgsl_memdesc_ops kgsl_coherent_ops = {
.free = kgsl_coherent_free,
};
void kgsl_cache_range_op(struct kgsl_memdesc *memdesc, int op)
{
void *addr = memdesc->hostptr;
int size = memdesc->size;
switch (op) {
case KGSL_CACHE_OP_FLUSH:
dmac_flush_range(addr, addr + size);
break;
case KGSL_CACHE_OP_CLEAN:
dmac_clean_range(addr, addr + size);
break;
case KGSL_CACHE_OP_INV:
dmac_inv_range(addr, addr + size);
break;
}
outer_cache_range_op_sg(memdesc->sg, memdesc->sglen, op);
}
EXPORT_SYMBOL(kgsl_cache_range_op);
static int
_kgsl_sharedmem_vmalloc(struct kgsl_memdesc *memdesc,
struct kgsl_pagetable *pagetable,
void *ptr, size_t size, unsigned int protflags)
{
int order, ret = 0;
int sglen = PAGE_ALIGN(size) / PAGE_SIZE;
int i;
memdesc->size = size;
memdesc->pagetable = pagetable;
memdesc->priv = KGSL_MEMFLAGS_CACHED;
memdesc->ops = &kgsl_vmalloc_ops;
memdesc->hostptr = (void *) ptr;
memdesc->sg = kmalloc(sglen * sizeof(struct scatterlist), GFP_KERNEL);
if (memdesc->sg == NULL) {
ret = -ENOMEM;
goto done;
}
memdesc->sglen = sglen;
sg_init_table(memdesc->sg, sglen);
for (i = 0; i < memdesc->sglen; i++, ptr += PAGE_SIZE) {
struct page *page = vmalloc_to_page(ptr);
if (!page) {
ret = -EINVAL;
goto done;
}
sg_set_page(&memdesc->sg[i], page, PAGE_SIZE, 0);
}
kgsl_cache_range_op(memdesc, KGSL_CACHE_OP_INV);
ret = kgsl_mmu_map(pagetable, memdesc, protflags);
if (ret)
goto done;
KGSL_STATS_ADD(size, kgsl_driver.stats.vmalloc,
kgsl_driver.stats.vmalloc_max);
order = get_order(size);
if (order < 16)
kgsl_driver.stats.histogram[order]++;
done:
if (ret)
kgsl_sharedmem_free(memdesc);
return ret;
}
int
kgsl_sharedmem_vmalloc(struct kgsl_memdesc *memdesc,
struct kgsl_pagetable *pagetable, size_t size)
{
void *ptr;
BUG_ON(size == 0);
size = ALIGN(size, PAGE_SIZE * 2);
ptr = vmalloc(size);
if (ptr == NULL) {
KGSL_CORE_ERR("vmalloc(%d) failed\n", size);
return -ENOMEM;
}
return _kgsl_sharedmem_vmalloc(memdesc, pagetable, ptr, size,
GSL_PT_PAGE_RV | GSL_PT_PAGE_WV);
}
EXPORT_SYMBOL(kgsl_sharedmem_vmalloc);
int
kgsl_sharedmem_vmalloc_user(struct kgsl_memdesc *memdesc,
struct kgsl_pagetable *pagetable,
size_t size, int flags)
{
void *ptr;
unsigned int protflags;
BUG_ON(size == 0);
ptr = vmalloc_user(size);
if (ptr == NULL) {
KGSL_CORE_ERR("vmalloc_user(%d) failed: allocated=%d\n",
size, kgsl_driver.stats.vmalloc);
return -ENOMEM;
}
protflags = GSL_PT_PAGE_RV;
if (!(flags & KGSL_MEMFLAGS_GPUREADONLY))
protflags |= GSL_PT_PAGE_WV;
return _kgsl_sharedmem_vmalloc(memdesc, pagetable, ptr, size,
protflags);
}
EXPORT_SYMBOL(kgsl_sharedmem_vmalloc_user);
int
kgsl_sharedmem_alloc_coherent(struct kgsl_memdesc *memdesc, size_t size)
{
int result = 0;
size = ALIGN(size, PAGE_SIZE);
memdesc->size = size;
memdesc->ops = &kgsl_coherent_ops;
memdesc->hostptr = dma_alloc_coherent(NULL, size, &memdesc->physaddr,
GFP_KERNEL);
if (memdesc->hostptr == NULL) {
KGSL_CORE_ERR("dma_alloc_coherent(%d) failed\n", size);
result = -ENOMEM;
goto err;
}
result = memdesc_sg_phys(memdesc, memdesc->physaddr, size);
if (result)
goto err;
/* Record statistics */
KGSL_STATS_ADD(size, kgsl_driver.stats.coherent,
kgsl_driver.stats.coherent_max);
err:
if (result)
kgsl_sharedmem_free(memdesc);
return result;
}
EXPORT_SYMBOL(kgsl_sharedmem_alloc_coherent);
void kgsl_sharedmem_free(struct kgsl_memdesc *memdesc)
{
if (memdesc == NULL || memdesc->size == 0)
return;
if (memdesc->gpuaddr)
kgsl_mmu_unmap(memdesc->pagetable, memdesc);
if (memdesc->ops && memdesc->ops->free)
memdesc->ops->free(memdesc);
kfree(memdesc->sg);
memset(memdesc, 0, sizeof(*memdesc));
}
EXPORT_SYMBOL(kgsl_sharedmem_free);
static int
_kgsl_sharedmem_ebimem(struct kgsl_memdesc *memdesc,
struct kgsl_pagetable *pagetable, size_t size)
{
int result = 0;
memdesc->size = size;
memdesc->pagetable = pagetable;
memdesc->ops = &kgsl_ebimem_ops;
memdesc->physaddr = allocate_contiguous_ebi_nomap(size, SZ_8K);
if (memdesc->physaddr == 0) {
KGSL_CORE_ERR("allocate_contiguous_ebi_nomap(%d) failed\n",
size);
return -ENOMEM;
}
result = memdesc_sg_phys(memdesc, memdesc->physaddr, size);
if (result)
goto err;
result = kgsl_mmu_map(pagetable, memdesc,
GSL_PT_PAGE_RV | GSL_PT_PAGE_WV);
if (result)
goto err;
KGSL_STATS_ADD(size, kgsl_driver.stats.coherent,
kgsl_driver.stats.coherent_max);
err:
if (result)
kgsl_sharedmem_free(memdesc);
return result;
}
int
kgsl_sharedmem_ebimem_user(struct kgsl_memdesc *memdesc,
struct kgsl_pagetable *pagetable,
size_t size, int flags)
{
size = ALIGN(size, PAGE_SIZE);
return _kgsl_sharedmem_ebimem(memdesc, pagetable, size);
}
EXPORT_SYMBOL(kgsl_sharedmem_ebimem_user);
int
kgsl_sharedmem_ebimem(struct kgsl_memdesc *memdesc,
struct kgsl_pagetable *pagetable, size_t size)
{
int result;
size = ALIGN(size, 8192);
result = _kgsl_sharedmem_ebimem(memdesc, pagetable, size);
if (result)
return result;
memdesc->hostptr = ioremap(memdesc->physaddr, size);
if (memdesc->hostptr == NULL) {
KGSL_CORE_ERR("ioremap failed\n");
kgsl_sharedmem_free(memdesc);
return -ENOMEM;
}
return 0;
}
EXPORT_SYMBOL(kgsl_sharedmem_ebimem);
int
kgsl_sharedmem_readl(const struct kgsl_memdesc *memdesc,
uint32_t *dst,
unsigned int offsetbytes)
{
BUG_ON(memdesc == NULL || memdesc->hostptr == NULL || dst == NULL);
WARN_ON(offsetbytes + sizeof(unsigned int) > memdesc->size);
if (offsetbytes + sizeof(unsigned int) > memdesc->size)
return -ERANGE;
*dst = readl_relaxed(memdesc->hostptr + offsetbytes);
return 0;
}
EXPORT_SYMBOL(kgsl_sharedmem_readl);
int
kgsl_sharedmem_writel(const struct kgsl_memdesc *memdesc,
unsigned int offsetbytes,
uint32_t src)
{
BUG_ON(memdesc == NULL || memdesc->hostptr == NULL);
BUG_ON(offsetbytes + sizeof(unsigned int) > memdesc->size);
kgsl_cffdump_setmem(memdesc->physaddr + offsetbytes,
src, sizeof(uint));
writel_relaxed(src, memdesc->hostptr + offsetbytes);
return 0;
}
EXPORT_SYMBOL(kgsl_sharedmem_writel);
int
kgsl_sharedmem_set(const struct kgsl_memdesc *memdesc, unsigned int offsetbytes,
unsigned int value, unsigned int sizebytes)
{
BUG_ON(memdesc == NULL || memdesc->hostptr == NULL);
BUG_ON(offsetbytes + sizebytes > memdesc->size);
kgsl_cffdump_setmem(memdesc->physaddr + offsetbytes, value,
sizebytes);
memset(memdesc->hostptr + offsetbytes, value, sizebytes);
return 0;
}
EXPORT_SYMBOL(kgsl_sharedmem_set);

drivers/gpu/msm/kgsl_sharedmem.h

@@ -1,133 +0,0 @@
/* Copyright (c) 2002,2007-2011, Code Aurora Forum. All rights reserved.
* Copyright (C) 2011 Sony Ericsson Mobile Communications AB.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __KGSL_SHAREDMEM_H
#define __KGSL_SHAREDMEM_H
#include <linux/slab.h>
#include <linux/dma-mapping.h>
/*
* Convert a physical address to a page
*/
#define phys_to_page(phys) (pfn_to_page(__phys_to_pfn(phys)))
struct kgsl_device;
struct kgsl_process_private;
#define KGSL_CACHE_OP_INV 0x01
#define KGSL_CACHE_OP_FLUSH 0x02
#define KGSL_CACHE_OP_CLEAN 0x03
/** Set if the memdesc describes cached memory */
#define KGSL_MEMFLAGS_CACHED 0x00000001
struct kgsl_memdesc_ops {
int (*vmflags)(struct kgsl_memdesc *);
int (*vmfault)(struct kgsl_memdesc *, struct vm_area_struct *,
struct vm_fault *);
void (*free)(struct kgsl_memdesc *memdesc);
};
extern struct kgsl_memdesc_ops kgsl_vmalloc_ops;
int kgsl_sharedmem_vmalloc(struct kgsl_memdesc *memdesc,
struct kgsl_pagetable *pagetable, size_t size);
int kgsl_sharedmem_vmalloc_user(struct kgsl_memdesc *memdesc,
struct kgsl_pagetable *pagetable,
size_t size, int flags);
int kgsl_sharedmem_alloc_coherent(struct kgsl_memdesc *memdesc, size_t size);
int kgsl_sharedmem_ebimem_user(struct kgsl_memdesc *memdesc,
struct kgsl_pagetable *pagetable,
size_t size, int flags);
int kgsl_sharedmem_ebimem(struct kgsl_memdesc *memdesc,
struct kgsl_pagetable *pagetable,
size_t size);
void kgsl_sharedmem_free(struct kgsl_memdesc *memdesc);
int kgsl_sharedmem_readl(const struct kgsl_memdesc *memdesc,
uint32_t *dst,
unsigned int offsetbytes);
int kgsl_sharedmem_writel(const struct kgsl_memdesc *memdesc,
unsigned int offsetbytes,
uint32_t src);
int kgsl_sharedmem_set(const struct kgsl_memdesc *memdesc,
unsigned int offsetbytes, unsigned int value,
unsigned int sizebytes);
void kgsl_cache_range_op(struct kgsl_memdesc *memdesc, int op);
void kgsl_process_init_sysfs(struct kgsl_process_private *private);
void kgsl_process_uninit_sysfs(struct kgsl_process_private *private);
int kgsl_sharedmem_init_sysfs(void);
void kgsl_sharedmem_uninit_sysfs(void);
static inline int
memdesc_sg_phys(struct kgsl_memdesc *memdesc,
unsigned int physaddr, unsigned int size)
{
struct page *page = phys_to_page(physaddr);
memdesc->sg = kmalloc(sizeof(struct scatterlist) * 1, GFP_KERNEL);
if (memdesc->sg == NULL)
return -ENOMEM;
memdesc->sglen = 1;
sg_init_table(memdesc->sg, 1);
sg_set_page(&memdesc->sg[0], page, size, 0);
return 0;
}
static inline int
kgsl_allocate(struct kgsl_memdesc *memdesc,
struct kgsl_pagetable *pagetable, size_t size)
{
#ifdef CONFIG_MSM_KGSL_MMU
return kgsl_sharedmem_vmalloc(memdesc, pagetable, size);
#else
return kgsl_sharedmem_ebimem(memdesc, pagetable, size);
#endif
}
static inline int
kgsl_allocate_user(struct kgsl_memdesc *memdesc,
struct kgsl_pagetable *pagetable,
size_t size, unsigned int flags)
{
#ifdef CONFIG_MSM_KGSL_MMU
return kgsl_sharedmem_vmalloc_user(memdesc, pagetable, size, flags);
#else
return kgsl_sharedmem_ebimem_user(memdesc, pagetable, size, flags);
#endif
}
static inline int
kgsl_allocate_contiguous(struct kgsl_memdesc *memdesc, size_t size)
{
int ret = kgsl_sharedmem_alloc_coherent(memdesc, size);
#ifndef CONFIG_MSM_KGSL_MMU
if (!ret)
memdesc->gpuaddr = memdesc->physaddr;
#endif
return ret;
}
#endif /* __KGSL_SHAREDMEM_H */
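A usage illustration (an editor's sketch, not part of this tree; the function name is invented): allocate a contiguous scratch buffer with the helper above, round-trip one word through the accessors from kgsl_sharedmem.c, and free it. Kernel context is assumed:

#include "kgsl.h"
#include "kgsl_sharedmem.h"

static int example_scratch_roundtrip(void)
{
	struct kgsl_memdesc scratch = {0};
	uint32_t val = 0;
	int ret;

	ret = kgsl_allocate_contiguous(&scratch, PAGE_SIZE);
	if (ret)
		return ret;
	/* hostptr is CPU-visible; write and read back the first word. */
	kgsl_sharedmem_writel(&scratch, 0, 0xdeadbeef);
	kgsl_sharedmem_readl(&scratch, &val, 0);
	kgsl_sharedmem_free(&scratch);
	return (val == 0xdeadbeef) ? 0 : -EIO;
}

Note that on MMU-less builds kgsl_allocate_contiguous also aliases gpuaddr to physaddr, so the same buffer is immediately visible to the GPU.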

drivers/gpu/msm/z180.c

@@ -1,949 +0,0 @@
/* Copyright (c) 2002,2007-2011, Code Aurora Forum. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/uaccess.h>
#include "kgsl.h"
#include "kgsl_cffdump.h"
#include "kgsl_sharedmem.h"
#include "z180.h"
#include "z180_reg.h"
#define DRIVER_VERSION_MAJOR 3
#define DRIVER_VERSION_MINOR 1
#define Z180_DEVICE(device) \
KGSL_CONTAINER_OF(device, struct z180_device, dev)
#define GSL_VGC_INT_MASK \
(REG_VGC_IRQSTATUS__MH_MASK | \
REG_VGC_IRQSTATUS__G2D_MASK | \
REG_VGC_IRQSTATUS__FIFO_MASK)
#define VGV3_NEXTCMD_JUMP 0x01
#define VGV3_NEXTCMD_NEXTCMD_FSHIFT 12
#define VGV3_NEXTCMD_NEXTCMD_FMASK 0x7
#define VGV3_CONTROL_MARKADD_FSHIFT 0
#define VGV3_CONTROL_MARKADD_FMASK 0xfff
#define Z180_PACKET_SIZE 15
#define Z180_MARKER_SIZE 10
#define Z180_CALL_CMD 0x1000
#define Z180_MARKER_CMD 0x8000
#define Z180_STREAM_END_CMD 0x9000
#define Z180_STREAM_PACKET 0x7C000176
#define Z180_STREAM_PACKET_CALL 0x7C000275
#define Z180_PACKET_COUNT 8
#define Z180_RB_SIZE (Z180_PACKET_SIZE*Z180_PACKET_COUNT \
*sizeof(uint32_t))
#define NUMTEXUNITS 4
#define TEXUNITREGCOUNT 25
#define VG_REGCOUNT 0x39
#define PACKETSIZE_BEGIN 3
#define PACKETSIZE_G2DCOLOR 2
#define PACKETSIZE_TEXUNIT (TEXUNITREGCOUNT * 2)
#define PACKETSIZE_REG (VG_REGCOUNT * 2)
#define PACKETSIZE_STATE (PACKETSIZE_TEXUNIT * NUMTEXUNITS + \
PACKETSIZE_REG + PACKETSIZE_BEGIN + \
PACKETSIZE_G2DCOLOR)
#define PACKETSIZE_STATESTREAM (ALIGN((PACKETSIZE_STATE * \
sizeof(unsigned int)), 32) / \
sizeof(unsigned int))
#define Z180_INVALID_CONTEXT UINT_MAX
/* z180 MH arbiter config*/
#define Z180_CFG_MHARB \
(0x10 \
| (0 << MH_ARBITER_CONFIG__SAME_PAGE_GRANULARITY__SHIFT) \
| (1 << MH_ARBITER_CONFIG__L1_ARB_ENABLE__SHIFT) \
| (1 << MH_ARBITER_CONFIG__L1_ARB_HOLD_ENABLE__SHIFT) \
| (0 << MH_ARBITER_CONFIG__L2_ARB_CONTROL__SHIFT) \
| (1 << MH_ARBITER_CONFIG__PAGE_SIZE__SHIFT) \
| (1 << MH_ARBITER_CONFIG__TC_REORDER_ENABLE__SHIFT) \
| (1 << MH_ARBITER_CONFIG__TC_ARB_HOLD_ENABLE__SHIFT) \
| (0 << MH_ARBITER_CONFIG__IN_FLIGHT_LIMIT_ENABLE__SHIFT) \
| (0x8 << MH_ARBITER_CONFIG__IN_FLIGHT_LIMIT__SHIFT) \
| (1 << MH_ARBITER_CONFIG__CP_CLNT_ENABLE__SHIFT) \
| (1 << MH_ARBITER_CONFIG__VGT_CLNT_ENABLE__SHIFT) \
| (1 << MH_ARBITER_CONFIG__TC_CLNT_ENABLE__SHIFT) \
| (1 << MH_ARBITER_CONFIG__RB_CLNT_ENABLE__SHIFT) \
| (1 << MH_ARBITER_CONFIG__PA_CLNT_ENABLE__SHIFT))
#define Z180_TIMESTAMP_EPSILON 20000
#define Z180_IDLE_COUNT_MAX 1000000
enum z180_cmdwindow_type {
Z180_CMDWINDOW_2D = 0x00000000,
Z180_CMDWINDOW_MMU = 0x00000002,
};
#define Z180_CMDWINDOW_TARGET_MASK 0x000000FF
#define Z180_CMDWINDOW_ADDR_MASK 0x00FFFF00
#define Z180_CMDWINDOW_TARGET_SHIFT 0
#define Z180_CMDWINDOW_ADDR_SHIFT 8
static int z180_start(struct kgsl_device *device, unsigned int init_ram);
static int z180_stop(struct kgsl_device *device);
static int z180_wait(struct kgsl_device *device,
unsigned int timestamp,
unsigned int msecs);
static void z180_regread(struct kgsl_device *device,
unsigned int offsetwords,
unsigned int *value);
static void z180_regwrite(struct kgsl_device *device,
unsigned int offsetwords,
unsigned int value);
static void z180_cmdwindow_write(struct kgsl_device *device,
unsigned int addr,
unsigned int data);
#define Z180_MMU_CONFIG \
(0x01 \
| (MMU_CONFIG << MH_MMU_CONFIG__RB_W_CLNT_BEHAVIOR__SHIFT) \
| (MMU_CONFIG << MH_MMU_CONFIG__CP_W_CLNT_BEHAVIOR__SHIFT) \
| (MMU_CONFIG << MH_MMU_CONFIG__CP_R0_CLNT_BEHAVIOR__SHIFT) \
| (MMU_CONFIG << MH_MMU_CONFIG__CP_R1_CLNT_BEHAVIOR__SHIFT) \
| (MMU_CONFIG << MH_MMU_CONFIG__CP_R2_CLNT_BEHAVIOR__SHIFT) \
| (MMU_CONFIG << MH_MMU_CONFIG__CP_R3_CLNT_BEHAVIOR__SHIFT) \
| (MMU_CONFIG << MH_MMU_CONFIG__CP_R4_CLNT_BEHAVIOR__SHIFT) \
| (MMU_CONFIG << MH_MMU_CONFIG__VGT_R0_CLNT_BEHAVIOR__SHIFT) \
| (MMU_CONFIG << MH_MMU_CONFIG__VGT_R1_CLNT_BEHAVIOR__SHIFT) \
| (MMU_CONFIG << MH_MMU_CONFIG__TC_R_CLNT_BEHAVIOR__SHIFT) \
| (MMU_CONFIG << MH_MMU_CONFIG__PA_W_CLNT_BEHAVIOR__SHIFT))
static const struct kgsl_functable z180_functable;
static struct z180_device device_2d0 = {
.dev = {
.name = DEVICE_2D0_NAME,
.id = KGSL_DEVICE_2D0,
.ver_major = DRIVER_VERSION_MAJOR,
.ver_minor = DRIVER_VERSION_MINOR,
.mh = {
.mharb = Z180_CFG_MHARB,
.mh_intf_cfg1 = 0x00032f07,
.mh_intf_cfg2 = 0x004b274f,
/* turn off memory protection unit by setting
acceptable physical address range to include
all pages. */
.mpu_base = 0x00000000,
.mpu_range = 0xFFFFF000,
},
.mmu = {
.config = Z180_MMU_CONFIG,
},
.pwrctrl = {
.pwr_rail = PWR_RAIL_GRP_2D_CLK,
.regulator_name = "fs_gfx2d0",
.irq_name = KGSL_2D0_IRQ,
},
.mutex = __MUTEX_INITIALIZER(device_2d0.dev.mutex),
.state = KGSL_STATE_INIT,
.active_cnt = 0,
.iomemname = KGSL_2D0_REG_MEMORY,
.ftbl = &z180_functable,
#ifdef CONFIG_HAS_EARLYSUSPEND
.display_off = {
.level = EARLY_SUSPEND_LEVEL_STOP_DRAWING,
.suspend = kgsl_early_suspend_driver,
.resume = kgsl_late_resume_driver,
},
#endif
},
};
static struct z180_device device_2d1 = {
.dev = {
.name = DEVICE_2D1_NAME,
.id = KGSL_DEVICE_2D1,
.ver_major = DRIVER_VERSION_MAJOR,
.ver_minor = DRIVER_VERSION_MINOR,
.mh = {
.mharb = Z180_CFG_MHARB,
.mh_intf_cfg1 = 0x00032f07,
.mh_intf_cfg2 = 0x004b274f,
/* turn off memory protection unit by setting
acceptable physical address range to include
all pages. */
.mpu_base = 0x00000000,
.mpu_range = 0xFFFFF000,
},
.mmu = {
.config = Z180_MMU_CONFIG,
},
.pwrctrl = {
.pwr_rail = PWR_RAIL_GRP_2D_CLK,
.regulator_name = "fs_gfx2d1",
.irq_name = KGSL_2D1_IRQ,
},
.mutex = __MUTEX_INITIALIZER(device_2d1.dev.mutex),
.state = KGSL_STATE_INIT,
.active_cnt = 0,
.iomemname = KGSL_2D1_REG_MEMORY,
.ftbl = &z180_functable,
.display_off = {
#ifdef CONFIG_HAS_EARLYSUSPEND
.level = EARLY_SUSPEND_LEVEL_STOP_DRAWING,
.suspend = kgsl_early_suspend_driver,
.resume = kgsl_late_resume_driver,
#endif
},
},
};
static irqreturn_t z180_isr(int irq, void *data)
{
irqreturn_t result = IRQ_NONE;
unsigned int status;
struct kgsl_device *device = (struct kgsl_device *) data;
struct z180_device *z180_dev = Z180_DEVICE(device);
z180_regread(device, ADDR_VGC_IRQSTATUS >> 2, &status);
if (status & GSL_VGC_INT_MASK) {
z180_regwrite(device,
ADDR_VGC_IRQSTATUS >> 2, status & GSL_VGC_INT_MASK);
result = IRQ_HANDLED;
if (status & REG_VGC_IRQSTATUS__FIFO_MASK)
KGSL_DRV_ERR(device, "z180 fifo interrupt\n");
if (status & REG_VGC_IRQSTATUS__MH_MASK)
kgsl_mh_intrcallback(device);
if (status & REG_VGC_IRQSTATUS__G2D_MASK) {
unsigned int count;
z180_regread(device,
ADDR_VGC_IRQ_ACTIVE_CNT >> 2,
&count);
count >>= 8;
count &= 255;
z180_dev->timestamp += count;
queue_work(device->work_queue, &device->ts_expired_ws);
wake_up_interruptible(&device->wait_queue);
atomic_notifier_call_chain(
&(device->ts_notifier_list),
device->id, NULL);
}
}
if ((device->pwrctrl.nap_allowed == true) &&
(device->requested_state == KGSL_STATE_NONE)) {
device->requested_state = KGSL_STATE_NAP;
queue_work(device->work_queue, &device->idle_check_ws);
}
mod_timer(&device->idle_timer,
jiffies + device->pwrctrl.interval_timeout);
return result;
}
static void z180_cleanup_pt(struct kgsl_device *device,
struct kgsl_pagetable *pagetable)
{
struct z180_device *z180_dev = Z180_DEVICE(device);
kgsl_mmu_unmap(pagetable, &device->mmu.setstate_memory);
kgsl_mmu_unmap(pagetable, &device->memstore);
kgsl_mmu_unmap(pagetable, &z180_dev->ringbuffer.cmdbufdesc);
}
static int z180_setup_pt(struct kgsl_device *device,
struct kgsl_pagetable *pagetable)
{
int result = 0;
struct z180_device *z180_dev = Z180_DEVICE(device);
result = kgsl_mmu_map_global(pagetable, &device->mmu.setstate_memory,
GSL_PT_PAGE_RV | GSL_PT_PAGE_WV);
if (result)
goto error;
result = kgsl_mmu_map_global(pagetable, &device->memstore,
GSL_PT_PAGE_RV | GSL_PT_PAGE_WV);
if (result)
goto error_unmap_dummy;
result = kgsl_mmu_map_global(pagetable,
&z180_dev->ringbuffer.cmdbufdesc,
GSL_PT_PAGE_RV);
if (result)
goto error_unmap_memstore;
return result;
error_unmap_dummy:
kgsl_mmu_unmap(pagetable, &device->mmu.setstate_memory);
error_unmap_memstore:
kgsl_mmu_unmap(pagetable, &device->memstore);
error:
return result;
}
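/*
 * Editor's note on the ringbuffer layout, inferred from the macros above:
 * the command buffer holds Z180_PACKET_COUNT slots of Z180_PACKET_SIZE
 * dwords each. rb_offset() yields the byte offset of a slot; addmarker()
 * fills its first Z180_MARKER_SIZE dwords with two marker packets, and
 * addcmd() places a 5-dword stream call immediately after them.
 */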
static inline unsigned int rb_offset(unsigned int index)
{
return index*sizeof(unsigned int)*(Z180_PACKET_SIZE);
}
static void addmarker(struct z180_ringbuffer *rb, unsigned int index)
{
char *ptr = (char *)(rb->cmdbufdesc.hostptr);
unsigned int *p = (unsigned int *)(ptr + rb_offset(index));
*p++ = Z180_STREAM_PACKET;
*p++ = (Z180_MARKER_CMD | 5);
*p++ = ADDR_VGV3_LAST << 24;
*p++ = ADDR_VGV3_LAST << 24;
*p++ = ADDR_VGV3_LAST << 24;
*p++ = Z180_STREAM_PACKET;
*p++ = 5;
*p++ = ADDR_VGV3_LAST << 24;
*p++ = ADDR_VGV3_LAST << 24;
*p++ = ADDR_VGV3_LAST << 24;
}
static void addcmd(struct z180_ringbuffer *rb, unsigned int index,
unsigned int cmd, unsigned int nextcnt)
{
char * ptr = (char *)(rb->cmdbufdesc.hostptr);
unsigned int *p = (unsigned int *)(ptr + (rb_offset(index)
+ (Z180_MARKER_SIZE * sizeof(unsigned int))));
*p++ = Z180_STREAM_PACKET_CALL;
*p++ = cmd;
*p++ = Z180_CALL_CMD | nextcnt;
*p++ = ADDR_VGV3_LAST << 24;
*p++ = ADDR_VGV3_LAST << 24;
}
static void z180_cmdstream_start(struct kgsl_device *device)
{
struct z180_device *z180_dev = Z180_DEVICE(device);
unsigned int cmd = VGV3_NEXTCMD_JUMP << VGV3_NEXTCMD_NEXTCMD_FSHIFT;
z180_dev->timestamp = 0;
z180_dev->current_timestamp = 0;
addmarker(&z180_dev->ringbuffer, 0);
z180_cmdwindow_write(device, ADDR_VGV3_MODE, 4);
z180_cmdwindow_write(device, ADDR_VGV3_NEXTADDR,
z180_dev->ringbuffer.cmdbufdesc.gpuaddr);
z180_cmdwindow_write(device, ADDR_VGV3_NEXTCMD, cmd | 5);
z180_cmdwindow_write(device, ADDR_VGV3_WRITEADDR,
device->memstore.gpuaddr);
cmd = (int)(((1) & VGV3_CONTROL_MARKADD_FMASK)
<< VGV3_CONTROL_MARKADD_FSHIFT);
z180_cmdwindow_write(device, ADDR_VGV3_CONTROL, cmd);
z180_cmdwindow_write(device, ADDR_VGV3_CONTROL, 0);
}
static int room_in_rb(struct z180_device *device)
{
int ts_diff;
ts_diff = device->current_timestamp - device->timestamp;
return ts_diff < Z180_PACKET_COUNT;
}
static int z180_idle(struct kgsl_device *device, unsigned int timeout)
{
int status = 0;
struct z180_device *z180_dev = Z180_DEVICE(device);
if (timestamp_cmp(z180_dev->current_timestamp,
z180_dev->timestamp) > 0)
status = z180_wait(device, z180_dev->current_timestamp,
timeout);
if (status)
KGSL_DRV_ERR(device, "z180_waittimestamp() timed out\n");
return status;
}
int
z180_cmdstream_issueibcmds(struct kgsl_device_private *dev_priv,
struct kgsl_context *context,
struct kgsl_ibdesc *ibdesc,
unsigned int numibs,
uint32_t *timestamp,
unsigned int ctrl)
{
long result = 0;
unsigned int ofs = PACKETSIZE_STATESTREAM * sizeof(unsigned int);
unsigned int cnt = 5;
unsigned int nextaddr = 0;
unsigned int index = 0;
unsigned int nextindex;
unsigned int nextcnt = Z180_STREAM_END_CMD | 5;
struct kgsl_memdesc tmp = {0};
unsigned int cmd;
struct kgsl_device *device = dev_priv->device;
struct kgsl_pagetable *pagetable = dev_priv->process_priv->pagetable;
struct z180_device *z180_dev = Z180_DEVICE(device);
unsigned int sizedwords;
if (device->state & KGSL_STATE_HUNG) {
result = -EINVAL;
goto error;
}
if (numibs != 1) {
KGSL_DRV_ERR(device, "Invalid number of ibs: %d\n", numibs);
result = -EINVAL;
goto error;
}
cmd = ibdesc[0].gpuaddr;
sizedwords = ibdesc[0].sizedwords;
tmp.hostptr = (void *)*timestamp;
KGSL_CMD_INFO(device, "ctxt %d ibaddr 0x%08x sizedwords %d\n",
context->id, cmd, sizedwords);
/* context switch */
if ((context->id != (int)z180_dev->ringbuffer.prevctx) ||
(ctrl & KGSL_CONTEXT_CTX_SWITCH)) {
KGSL_CMD_INFO(device, "context switch %d -> %d\n",
z180_dev->ringbuffer.prevctx, context->id);
kgsl_mmu_setstate(device, pagetable);
cnt = PACKETSIZE_STATESTREAM;
ofs = 0;
}
kgsl_setstate(device, kgsl_mmu_pt_get_flags(device->mmu.hwpagetable,
device->id));
result = wait_event_interruptible_timeout(device->wait_queue,
room_in_rb(z180_dev),
msecs_to_jiffies(KGSL_TIMEOUT_DEFAULT));
if (result < 0) {
KGSL_CMD_ERR(device, "wait_event_interruptible_timeout "
"failed: %ld\n", result);
goto error;
}
result = 0;
index = z180_dev->current_timestamp % Z180_PACKET_COUNT;
z180_dev->current_timestamp++;
nextindex = z180_dev->current_timestamp % Z180_PACKET_COUNT;
*timestamp = z180_dev->current_timestamp;
z180_dev->ringbuffer.prevctx = context->id;
addcmd(&z180_dev->ringbuffer, index, cmd + ofs, cnt);
/* Make sure the next ringbuffer entry has a marker */
addmarker(&z180_dev->ringbuffer, nextindex);
nextaddr = z180_dev->ringbuffer.cmdbufdesc.gpuaddr
+ rb_offset(nextindex);
tmp.hostptr = (void *)(tmp.hostptr +
(sizedwords * sizeof(unsigned int)));
tmp.size = 12;
kgsl_sharedmem_writel(&tmp, 4, nextaddr);
kgsl_sharedmem_writel(&tmp, 8, nextcnt);
/* sync memory before activating the hardware for the new command*/
mb();
cmd = (int)(((2) & VGV3_CONTROL_MARKADD_FMASK)
<< VGV3_CONTROL_MARKADD_FSHIFT);
z180_cmdwindow_write(device, ADDR_VGV3_CONTROL, cmd);
z180_cmdwindow_write(device, ADDR_VGV3_CONTROL, 0);
error:
return (int)result;
}
static int z180_ringbuffer_init(struct kgsl_device *device)
{
struct z180_device *z180_dev = Z180_DEVICE(device);
memset(&z180_dev->ringbuffer, 0, sizeof(struct z180_ringbuffer));
z180_dev->ringbuffer.prevctx = Z180_INVALID_CONTEXT;
return kgsl_allocate_contiguous(&z180_dev->ringbuffer.cmdbufdesc,
Z180_RB_SIZE);
}
static void z180_ringbuffer_close(struct kgsl_device *device)
{
struct z180_device *z180_dev = Z180_DEVICE(device);
kgsl_sharedmem_free(&z180_dev->ringbuffer.cmdbufdesc);
memset(&z180_dev->ringbuffer, 0, sizeof(struct z180_ringbuffer));
}
static int __devinit z180_probe(struct platform_device *pdev)
{
int status = -EINVAL;
struct kgsl_device *device = NULL;
struct z180_device *z180_dev;
device = (struct kgsl_device *)pdev->id_entry->driver_data;
device->parentdev = &pdev->dev;
z180_dev = Z180_DEVICE(device);
spin_lock_init(&z180_dev->cmdwin_lock);
status = z180_ringbuffer_init(device);
if (status != 0)
goto error;
status = kgsl_device_platform_probe(device, z180_isr);
if (status)
goto error_close_ringbuffer;
kgsl_pwrscale_init(device);
return status;
error_close_ringbuffer:
z180_ringbuffer_close(device);
error:
device->parentdev = NULL;
return status;
}
static int __devexit z180_remove(struct platform_device *pdev)
{
struct kgsl_device *device = NULL;
device = (struct kgsl_device *)pdev->id_entry->driver_data;
kgsl_pwrscale_close(device);
kgsl_device_platform_remove(device);
z180_ringbuffer_close(device);
return 0;
}
static int z180_start(struct kgsl_device *device, unsigned int init_ram)
{
int status = 0;
device->state = KGSL_STATE_INIT;
device->requested_state = KGSL_STATE_NONE;
KGSL_PWR_WARN(device, "state -> INIT, device %d\n", device->id);
kgsl_pwrctrl_enable(device);
/* Set interrupts to 0 to ensure a good state */
z180_regwrite(device, (ADDR_VGC_IRQENABLE >> 2), 0x0);
kgsl_mh_start(device);
status = kgsl_mmu_start(device);
if (status)
goto error_clk_off;
z180_cmdstream_start(device);
mod_timer(&device->idle_timer, jiffies + FIRST_TIMEOUT);
kgsl_pwrctrl_irq(device, KGSL_PWRFLAGS_ON);
return 0;
error_clk_off:
z180_regwrite(device, (ADDR_VGC_IRQENABLE >> 2), 0);
kgsl_pwrctrl_disable(device);
return status;
}
static int z180_stop(struct kgsl_device *device)
{
z180_idle(device, KGSL_TIMEOUT_DEFAULT);
del_timer_sync(&device->idle_timer);
kgsl_mmu_stop(device);
/* Disable the clocks before the power rail. */
kgsl_pwrctrl_irq(device, KGSL_PWRFLAGS_OFF);
kgsl_pwrctrl_disable(device);
return 0;
}
static int z180_getproperty(struct kgsl_device *device,
enum kgsl_property_type type,
void *value,
unsigned int sizebytes)
{
int status = -EINVAL;
switch (type) {
case KGSL_PROP_DEVICE_INFO:
{
struct kgsl_devinfo devinfo;
if (sizebytes != sizeof(devinfo)) {
status = -EINVAL;
break;
}
memset(&devinfo, 0, sizeof(devinfo));
devinfo.device_id = device->id+1;
devinfo.chip_id = 0;
devinfo.mmu_enabled = kgsl_mmu_enabled();
if (copy_to_user(value, &devinfo, sizeof(devinfo)) !=
0) {
status = -EFAULT;
break;
}
status = 0;
}
break;
case KGSL_PROP_MMU_ENABLE:
{
int mmu_prop = kgsl_mmu_enabled();
if (sizebytes != sizeof(int)) {
status = -EINVAL;
break;
}
if (copy_to_user(value, &mmu_prop, sizeof(mmu_prop))) {
status = -EFAULT;
break;
}
status = 0;
}
break;
default:
KGSL_DRV_ERR(device, "invalid property: %d\n", type);
status = -EINVAL;
}
return status;
}
static unsigned int z180_isidle(struct kgsl_device *device)
{
struct z180_device *z180_dev = Z180_DEVICE(device);
return (timestamp_cmp(z180_dev->timestamp,
z180_dev->current_timestamp) == 0) ? true : false;
}
static int z180_suspend_context(struct kgsl_device *device)
{
struct z180_device *z180_dev = Z180_DEVICE(device);
z180_dev->ringbuffer.prevctx = Z180_INVALID_CONTEXT;
return 0;
}
/* Not all Z180 registers are directly accessible.
* The _z180_(read|write)_simple functions below handle the ones that are.
*/
static void _z180_regread_simple(struct kgsl_device *device,
unsigned int offsetwords,
unsigned int *value)
{
unsigned int *reg;
BUG_ON(offsetwords * sizeof(uint32_t) >= device->regspace.sizebytes);
reg = (unsigned int *)(device->regspace.mmio_virt_base
+ (offsetwords << 2));
/* ensure this read finishes before the next one,
* i.e. act like normal readl() */
*value = __raw_readl(reg);
rmb();
}
static void _z180_regwrite_simple(struct kgsl_device *device,
unsigned int offsetwords,
unsigned int value)
{
unsigned int *reg;
BUG_ON(offsetwords*sizeof(uint32_t) >= device->regspace.sizebytes);
reg = (unsigned int *)(device->regspace.mmio_virt_base
+ (offsetwords << 2));
kgsl_cffdump_regwrite(device->id, offsetwords << 2, value);
/* ensure previous writes post before this one,
* i.e. act like normal writel() */
wmb();
__raw_writel(value, reg);
}
/* The MH registers must be accessed via a two-step (read|write)
* process: the first access selects the register and the second moves
* the data. These registers may be accessed from interrupt context
* during the handling of MH or MMU error interrupts, so a spin lock is
* used to ensure that the two-step sequence is not interrupted.
*/
static void _z180_regread_mmu(struct kgsl_device *device,
unsigned int offsetwords,
unsigned int *value)
{
struct z180_device *z180_dev = Z180_DEVICE(device);
unsigned long flags;
spin_lock_irqsave(&z180_dev->cmdwin_lock, flags);
_z180_regwrite_simple(device, (ADDR_VGC_MH_READ_ADDR >> 2),
offsetwords);
_z180_regread_simple(device, (ADDR_VGC_MH_DATA_ADDR >> 2), value);
spin_unlock_irqrestore(&z180_dev->cmdwin_lock, flags);
}
static void _z180_regwrite_mmu(struct kgsl_device *device,
unsigned int offsetwords,
unsigned int value)
{
struct z180_device *z180_dev = Z180_DEVICE(device);
unsigned int cmdwinaddr;
unsigned long flags;
cmdwinaddr = ((Z180_CMDWINDOW_MMU << Z180_CMDWINDOW_TARGET_SHIFT) &
Z180_CMDWINDOW_TARGET_MASK);
cmdwinaddr |= ((offsetwords << Z180_CMDWINDOW_ADDR_SHIFT) &
Z180_CMDWINDOW_ADDR_MASK);
spin_lock_irqsave(&z180_dev->cmdwin_lock, flags);
_z180_regwrite_simple(device, ADDR_VGC_MMUCOMMANDSTREAM >> 2,
cmdwinaddr);
_z180_regwrite_simple(device, ADDR_VGC_MMUCOMMANDSTREAM >> 2, value);
spin_unlock_irqrestore(&z180_dev->cmdwin_lock, flags);
}
/* The rest of the code should not have to care whether it is accessing
* MMU registers or normal registers, so handle the distinction here.
*/
static void z180_regread(struct kgsl_device *device,
unsigned int offsetwords,
unsigned int *value)
{
if (!in_interrupt())
kgsl_pre_hwaccess(device);
if ((offsetwords >= MH_ARBITER_CONFIG &&
offsetwords <= MH_AXI_HALT_CONTROL) ||
(offsetwords >= MH_MMU_CONFIG &&
offsetwords <= MH_MMU_MPU_END)) {
_z180_regread_mmu(device, offsetwords, value);
} else {
_z180_regread_simple(device, offsetwords, value);
}
}
static void z180_regwrite(struct kgsl_device *device,
unsigned int offsetwords,
unsigned int value)
{
if (!in_interrupt())
kgsl_pre_hwaccess(device);
if ((offsetwords >= MH_ARBITER_CONFIG &&
offsetwords <= MH_CLNT_INTF_CTRL_CONFIG2) ||
(offsetwords >= MH_MMU_CONFIG &&
offsetwords <= MH_MMU_MPU_END)) {
_z180_regwrite_mmu(device, offsetwords, value);
} else {
_z180_regwrite_simple(device, offsetwords, value);
}
}
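/*
 * Editor's note: 2D core registers behind the command window are
 * programmed like the MMU window above: the first write to
 * ADDR_VGC_COMMANDSTREAM selects the target register and the second
 * supplies the data.
 */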
static void z180_cmdwindow_write(struct kgsl_device *device,
unsigned int addr, unsigned int data)
{
unsigned int cmdwinaddr;
cmdwinaddr = ((Z180_CMDWINDOW_2D << Z180_CMDWINDOW_TARGET_SHIFT) &
Z180_CMDWINDOW_TARGET_MASK);
cmdwinaddr |= ((addr << Z180_CMDWINDOW_ADDR_SHIFT) &
Z180_CMDWINDOW_ADDR_MASK);
z180_regwrite(device, ADDR_VGC_COMMANDSTREAM >> 2, cmdwinaddr);
z180_regwrite(device, ADDR_VGC_COMMANDSTREAM >> 2, data);
}
static unsigned int z180_readtimestamp(struct kgsl_device *device,
enum kgsl_timestamp_type type)
{
struct z180_device *z180_dev = Z180_DEVICE(device);
/* get current EOP timestamp */
return z180_dev->timestamp;
}
static int z180_waittimestamp(struct kgsl_device *device,
unsigned int timestamp,
unsigned int msecs)
{
int status = -EINVAL;
/* Don't wait forever, set a max (10 sec) value for now */
if (msecs == -1)
msecs = 10 * MSEC_PER_SEC;
mutex_unlock(&device->mutex);
status = z180_wait(device, timestamp, msecs);
mutex_lock(&device->mutex);
return status;
}
static int z180_wait(struct kgsl_device *device,
unsigned int timestamp,
unsigned int msecs)
{
int status = -EINVAL;
long timeout = 0;
timeout = wait_io_event_interruptible_timeout(
device->wait_queue,
kgsl_check_timestamp(device, timestamp),
msecs_to_jiffies(msecs));
if (timeout > 0)
status = 0;
else if (timeout == 0) {
status = -ETIMEDOUT;
device->state = KGSL_STATE_HUNG;
KGSL_PWR_WARN(device, "state -> HUNG, device %d\n", device->id);
} else
status = timeout;
return status;
}
static void
z180_drawctxt_destroy(struct kgsl_device *device,
struct kgsl_context *context)
{
struct z180_device *z180_dev = Z180_DEVICE(device);
z180_idle(device, KGSL_TIMEOUT_DEFAULT);
if (z180_dev->ringbuffer.prevctx == context->id) {
z180_dev->ringbuffer.prevctx = Z180_INVALID_CONTEXT;
device->mmu.hwpagetable = device->mmu.defaultpagetable;
kgsl_setstate(device, KGSL_MMUFLAGS_PTUPDATE);
}
}
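/*
 * Editor's note: busy_time and total_time below cover the same
 * interval, i.e. the 2D core is reported as fully busy between
 * samples.
 */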
static void z180_power_stats(struct kgsl_device *device,
struct kgsl_power_stats *stats)
{
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
if (pwr->time == 0) {
pwr->time = ktime_to_us(ktime_get());
stats->total_time = 0;
stats->busy_time = 0;
} else {
s64 tmp;
tmp = ktime_to_us(ktime_get());
stats->total_time = tmp - pwr->time;
stats->busy_time = tmp - pwr->time;
pwr->time = tmp;
}
}
static void z180_irqctrl(struct kgsl_device *device, int state)
{
/* Control interrupts for Z180 and the Z180 MMU */
if (state) {
z180_regwrite(device, (ADDR_VGC_IRQENABLE >> 2), 3);
z180_regwrite(device, MH_INTERRUPT_MASK, KGSL_MMU_INT_MASK);
} else {
z180_regwrite(device, (ADDR_VGC_IRQENABLE >> 2), 0);
z180_regwrite(device, MH_INTERRUPT_MASK, 0);
}
}
static const struct kgsl_functable z180_functable = {
/* Mandatory functions */
.regread = z180_regread,
.regwrite = z180_regwrite,
.idle = z180_idle,
.isidle = z180_isidle,
.suspend_context = z180_suspend_context,
.start = z180_start,
.stop = z180_stop,
.getproperty = z180_getproperty,
.waittimestamp = z180_waittimestamp,
.readtimestamp = z180_readtimestamp,
.issueibcmds = z180_cmdstream_issueibcmds,
.setup_pt = z180_setup_pt,
.cleanup_pt = z180_cleanup_pt,
.power_stats = z180_power_stats,
.irqctrl = z180_irqctrl,
/* Optional functions */
.drawctxt_create = NULL,
.drawctxt_destroy = z180_drawctxt_destroy,
.ioctl = NULL,
};
static struct platform_device_id z180_id_table[] = {
{ DEVICE_2D0_NAME, (kernel_ulong_t)&device_2d0.dev, },
{ DEVICE_2D1_NAME, (kernel_ulong_t)&device_2d1.dev, },
{ },
};
MODULE_DEVICE_TABLE(platform, z180_id_table);
static struct platform_driver z180_platform_driver = {
.probe = z180_probe,
.remove = __devexit_p(z180_remove),
.suspend = kgsl_suspend_driver,
.resume = kgsl_resume_driver,
.id_table = z180_id_table,
.driver = {
.owner = THIS_MODULE,
.name = DEVICE_2D_NAME,
.pm = &kgsl_pm_ops,
}
};
static int __init kgsl_2d_init(void)
{
return platform_driver_register(&z180_platform_driver);
}
static void __exit kgsl_2d_exit(void)
{
platform_driver_unregister(&z180_platform_driver);
}
module_init(kgsl_2d_init);
module_exit(kgsl_2d_exit);
MODULE_DESCRIPTION("2D Graphics driver");
MODULE_VERSION("1.2");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:kgsl_2d");

Some files were not shown because too many files have changed in this diff.