ashmem: Fix ASHMEM_SET_PROT_MASK.
ashmem: Support lseek(2) in ashmem driver
ashmem: Fix the build failure when OUTER_CACHE is enabled
ashmem: Fix ashmem vm range comparison to stop roll-over
Zram currently uses LZO compression. With Snappy, it uses less CPU time and is
thus more useful. The sacrifice in compression ratio is small.
Zram's LZO and Snappy support can be independently enabled at compile time and
each zram device can switch between compression methods when unused.
When only a single compression method is enabled at compile time, no
indirection penalty is incurred.
http://driverdev.linuxdriverproject.org/pipermail/devel/2011-April/015114.html
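A minimal sketch of how such compile-time backend selection can avoid the
indirection penalty; the names (zram_compress, lzo_compress, and the Kconfig
symbols) are assumptions for illustration, not the actual zram code:

  #include <linux/types.h>

  struct zram_comp_ops {
          int (*compress)(const unsigned char *src, size_t src_len,
                          unsigned char *dst, size_t *dst_len);
  };

  #if defined(CONFIG_ZRAM_LZO) && !defined(CONFIG_ZRAM_SNAPPY)
  /* Single backend compiled in: calls resolve directly, no indirection. */
  #define zram_compress(zram, s, slen, d, dlen) \
          lzo_compress(s, slen, d, dlen)
  #else
  /* Both backends compiled in: dispatch through the per-device ops,
   * which can be swapped while the device is unused. */
  #define zram_compress(zram, s, slen, d, dlen) \
          ((zram)->ops->compress(s, slen, d, dlen))
  #endif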
The current PIO code does not transfer all of the data when the data
size is not a multiple of 4 bytes. The last few bytes are not written
to the card, resulting in no DATAEND interrupt from SDCC. This patch
allows data transfers of non-aligned sizes in PIO mode.
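A hedged sketch of the usual fix for this kind of tail handling (the function
name is illustrative, not the exact msm_sdcc code): whole 32-bit words are
written first, then the remaining 1-3 bytes are padded into one final word so
the controller sees the complete transfer and raises DATAEND.

  #include <linux/io.h>
  #include <linux/string.h>
  #include <linux/types.h>

  static void pio_write(void __iomem *fifo, const u8 *buf, size_t len)
  {
          /* Write all whole 32-bit words. */
          while (len >= 4) {
                  u32 word;

                  memcpy(&word, buf, 4);   /* buf may be unaligned */
                  writel(word, fifo);
                  buf += 4;
                  len -= 4;
          }

          /* Previously the transfer stopped here and 1-3 trailing
           * bytes were never written, so DATAEND never fired. Pad
           * them into one last word. */
          if (len) {
                  u32 word = 0;

                  memcpy(&word, buf, len);
                  writel(word, fifo);
          }
  }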
Enable the prog done interrupt for the stop command (CMD12) that is
sent after a multi-block write (CMD25). The PROG_DONE bit is set when
the card has finished its programming and is ready for the next data.

After every write request the card is polled for ready status using
CMD13. For a multi-block write (CMD25), the stop command (CMD12) is
sent before this CMD13 polling. If we enable the prog done interrupt
for CMD12, the CMD13 polling can be avoided: the interrupt itself
signals that the card is done with its programming and is ready for
the next request.
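A sketch of the interrupt-enable step, with an assumed register offset and
bit position (the real msm_sdcc names may differ):

  #include <linux/io.h>
  #include <linux/types.h>

  #define MMCIMASK0     0x03c       /* interrupt mask register (assumed offset) */
  #define MCI_PROGDONE  (1 << 23)   /* programming-done status bit (assumed) */

  struct msmsdcc_host { void __iomem *base; };   /* reduced stand-in */

  /* Unmask PROG_DONE before issuing CMD12 after a CMD25, so the
   * "card finished programming" state arrives as an interrupt and
   * no CMD13 polling loop is needed. */
  static void msmsdcc_enable_prog_done(struct msmsdcc_host *host)
  {
          u32 mask = readl(host->base + MMCIMASK0);

          writel(mask | MCI_PROGDONE, host->base + MMCIMASK0);
  }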
In the context of the request processing thread, the data mover lock
is acquired after the host lock. In another context, in the data
mover's completion handler, the locks are acquired in the reverse
order, resulting in a possible circular lock dependency warning.
Hence, schedule a tasklet to process the DMA completion so as to
avoid nested locks.
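A sketch of the deferral pattern, using the kernel's tasklet API against a
reduced stand-in for the driver's host structure (the msmsdcc_* names are
assumptions):

  #include <linux/interrupt.h>
  #include <linux/spinlock.h>

  struct msmsdcc_host {                /* reduced stand-in for the real struct */
          spinlock_t lock;
          struct tasklet_struct dma_tlet;
  };

  /* Runs in softirq context, outside the data mover's lock, so
   * taking host->lock here cannot invert the lock order. */
  static void msmsdcc_dma_complete_tasklet(unsigned long data)
  {
          struct msmsdcc_host *host = (struct msmsdcc_host *)data;
          unsigned long flags;

          spin_lock_irqsave(&host->lock, flags);
          /* ... process the completed DMA request ... */
          spin_unlock_irqrestore(&host->lock, flags);
  }

  /* Data mover completion callback: called with the data mover's
   * lock held, so only schedule deferred work here. */
  static void msmsdcc_dma_complete_func(void *data)
  {
          struct msmsdcc_host *host = data;

          tasklet_schedule(&host->dma_tlet);
  }

  /* At probe time:
   *   tasklet_init(&host->dma_tlet, msmsdcc_dma_complete_tasklet,
   *                (unsigned long)host);
   */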
CONFIG_MMC_MSM7X00A_RESUME_IN_WQ and CONFIG_MMC_EMBEDDED_SDIO don't exist
in Kconfig and are never defined anywhere else; therefore remove all
references to them from the source code.
fudgeswap acts as follows:

If set to non-zero (default is 512K):
    Check the amount of SWAP_FREE space available.
    If > 0KB is available:
        if fudgeswap > swapfree:
            other_file += swapfree
        else:
            other_file += fudgeswap

In short: fudgeswap is added to other_file as long as it is less than
the free swap (see the sketch after this description).

Setting this to a very large positive number indicates that swap should
be treated as entirely free (and will slow the system down); smaller
numbers put some pressure on swap without slowing the system down as
much; small negative numbers let the system run faster at the same
minfree level.

The default is 512 to apply a little pressure to use some swap, but
this can be modified at runtime via:
/sys/module/lowmemorykiller/parameters/fudgeswap
originally by ezterry
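A C sketch of the adjustment above, assuming it runs inside the
lowmemorykiller shrinker; si_swapinfo() is a real kernel helper, while the
function name and the unit handling are illustrative:

  #include <linux/kernel.h>
  #include <linux/swap.h>

  /* Exposed as a module parameter in the real patch; units follow
   * whatever other_file is counted in. */
  static long fudgeswap = 512;

  static long lowmem_adjust_other_file(long other_file)
  {
          struct sysinfo si;

          if (fudgeswap == 0)
                  return other_file;

          si_swapinfo(&si);
          if (si.freeswap > 0) {
                  /* Credit free swap, capped at fudgeswap; a negative
                   * fudgeswap subtracts, adding memory pressure. */
                  if (fudgeswap > (long)si.freeswap)
                          other_file += si.freeswap;
                  else
                          other_file += fudgeswap;
          }
          return other_file;
  }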
staging: logger: hold mutex while removing reader
staging: android: logger: clarify non-update of w_off in do_write_log_from_user
staging: android: logger: clarify code in clock_interval
staging: android: logger: reorder prepare_to_wait and mutex_lock
staging: android: logger: simplify and optimize get_entry_len
staging: android: logger: Change logger_offset() from macro to function
Staging: android: fixed white spaces coding style issue in logger.c
android: logger: bump up the logger buffer sizes
android, lowmemorykiller: remove task handoff notifier
staging: android: lowmemorykiller: Fix task_struct leak
staging: android/lowmemorykiller: Don't unregister notifier from atomic context
staging: android, lowmemorykiller: convert to use oom_score_adj
staging: android/lowmemorykiller: Do not kill kernel threads
staging: android/lowmemorykiller: No need for task->signal check
staging: android/lowmemorykiller: Better mm handling
staging: android/lowmemorykiller: Don't grab tasklist_lock
staging: android: lowmemorykiller: Don't wait more than one second for a process to die
Staging: android: fixed 80 characters warnings in lowmemorykiller.c
staging: android: lowmemorykiller: Ignore shmem pages in page-cache
staging: android: lowmemorykiller: Remove bitrotted codepath
staging: android: lowmemkiller: Substantially reduce overhead during reclaim
staging: android: lowmemorykiller: Don't try to kill the same pid over and over
Staging: android: binder: Fix crashes when sharing a binder file between processes
drivers:staging:android Typos: fix some comments that have typos in them.
fs: Remove missed ->fds_bits from cessation use of fd_set structs internally
Staging:android: Change type for binder_debug_no_lock switch to bool
Staging: android: binder: Fix use-after-free bug
Separate ib parse checking from cffdump as it is useful
in other situations. This is controlled by a new debugfs
file, ib_check. All ib checking is off (0) by default,
because parsing and mem_entry lookup can have a performance
impact on some benchmarks. Level 1 checking verifies the
IB1s. Level 2 checking also verifies the IB2.
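A sketch of how the level dispatch reads; the type and function names are
placeholders for the kgsl code, and only the 0/1/2 semantics follow the text:

  #include <linux/types.h>

  struct kgsl_device { unsigned int ib_check_level; };  /* set via debugfs */

  /* Stand-in for the real parse + mem_entry lookup. */
  static bool ib_ok(unsigned int gpuaddr, unsigned int sizedwords)
  {
          return gpuaddr != 0 && sizedwords != 0;
  }

  static bool check_ibs(struct kgsl_device *device,
                        unsigned int ib1, unsigned int ib1_size,
                        unsigned int ib2, unsigned int ib2_size)
  {
          if (device->ib_check_level == 0)
                  return true;                  /* checking disabled */
          if (!ib_ok(ib1, ib1_size))            /* level >= 1: verify IB1 */
                  return false;
          if (device->ib_check_level >= 2 &&
              !ib_ok(ib2, ib2_size))            /* level 2: also verify IB2 */
                  return false;
          return true;
  }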
Add nop packets in the ringbuffer at the start and end of IB buffers
submitted by the user space driver. These nop packets serve as markers
that can be used during replay, recovery, and snapshot to get valid
data for a GPU hang dump.
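A sketch of the wrapping step. The PM4 helper macros follow the usual
adreno_pm4types.h definitions (assumed here), and the marker values are
invented for illustration:

  /* PM4 helpers as commonly defined for adreno (assumed): */
  #define CP_TYPE3_PKT            (3 << 30)
  #define CP_NOP                  0x10
  #define CP_INDIRECT_BUFFER_PFD  0x37
  #define cp_type3_packet(op, cnt) \
          (CP_TYPE3_PKT | (((cnt) - 1) << 16) | (((op) & 0xff) << 8))
  #define cp_nop_packet(cnt)      cp_type3_packet(CP_NOP, (cnt))

  /* Marker values are invented; the real ones live in the driver. */
  #define START_IB_IDENTIFIER  0x2eadbeef
  #define END_IB_IDENTIFIER    0x2abedead

  static unsigned int *wrap_ib(unsigned int *cmds,
                               unsigned int gpuaddr,
                               unsigned int sizedwords)
  {
          *cmds++ = cp_nop_packet(1);      /* marker: start of user IB */
          *cmds++ = START_IB_IDENTIFIER;

          *cmds++ = cp_type3_packet(CP_INDIRECT_BUFFER_PFD, 2);
          *cmds++ = gpuaddr;               /* user IB address */
          *cmds++ = sizedwords;            /* user IB size in dwords */

          *cmds++ = cp_nop_packet(1);      /* marker: end of user IB */
          *cmds++ = END_IB_IDENTIFIER;
          return cmds;
  }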
User memory needs to be zeroed out before it is sent to the user.
To do this, the kernel maps the page, memsets it to zero and then
unmaps it. By virtue of mapping it, this forces us to flush the
dcache to ensure cache coherency between kernel and user mappings.
Originally, the page_alloc loop was using GFP_ZERO (which does a
map, memset, and unmap for each individual page) and then we were
additionally calling flush_dcache_page() for each page killing us
on performance. It is far more efficient, especially for large
allocations (> 1MB), to allocate the pages without GFP_ZERO and
then to vmap the entire allocation, memset it to zero, flush the
cache and then unmap. This process is slightly slower for very
small allocations, but only by a few microseconds, and is well
within the margin of acceptability. In all, the new scheme is
faster than the default for all sizes greater than 16k, and is
almost 4X faster for 2MB and 4MB allocations which are common for
textures and very large buffer objects.
The downside is that if there isn't enough vmalloc room for the
allocation, we are forced to fall back to a slow page-by-page
memset/flush, but this should happen rarely (if at all) and is only
included for completeness.
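A sketch of the fast path described above (error and fallback handling
trimmed); vmap(), vunmap(), and flush_kernel_vmap_range() are standard
kernel APIs, while the function name is illustrative:

  #include <linux/errno.h>
  #include <linux/highmem.h>
  #include <linux/mm.h>
  #include <linux/string.h>
  #include <linux/vmalloc.h>

  /* Zero 'count' freshly allocated pages with a single map + flush
   * instead of a per-page map + memset + flush_dcache_page(). */
  static int zero_page_range(struct page **pages, unsigned int count)
  {
          size_t size = (size_t)count << PAGE_SHIFT;
          void *addr = vmap(pages, count, VM_MAP, PAGE_KERNEL);

          if (addr == NULL)
                  return -ENOMEM;  /* caller falls back to page-by-page */

          memset(addr, 0, size);
          flush_kernel_vmap_range(addr, size);  /* one flush for the lot */
          vunmap(addr);
          return 0;
  }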
Add a guard page on the backside of page_alloc MMU mappings to protect
against an overzealous GPU pre-fetch engine that sometimes oversteps the
end of the mapped region. The same physical page can be re-used for each
mapping, so we only need to allocate one physical page to rule them all
and in the darkness bind them.
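A sketch of the shared guard page idea; kgsl_guard_page and the mapping
comment are illustrative:

  #include <linux/gfp.h>
  #include <linux/mm_types.h>

  /* One physical page shared as the guard for every mapping. */
  static struct page *kgsl_guard_page;

  static struct page *kgsl_get_guard_page(void)
  {
          if (kgsl_guard_page == NULL)
                  kgsl_guard_page = alloc_page(GFP_KERNEL | __GFP_ZERO |
                                               __GFP_HIGHMEM);
          return kgsl_guard_page;
  }

  /* When building the GPU MMU mapping, append one extra PTE pointing
   * at this page so a prefetch past the end lands on harmless memory
   * (mapped GPU-readable only, never handed to userspace). */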
Change the vmalloc allocation name to something more appropriate, since
we do not allocate memory using vmalloc for the userspace driver; we
directly allocate physical pages and map them into the user address
space. The name is changed from vmalloc to page_alloc. Add sysfs files
to track memory usage via both vmalloc and page_alloc.
Memory mapped through kgsl_mmu_map_global() is supposed to have the
same GPU address in all pagetables, and the memdesc will persist
beyond the lifetime of any single pagetable. Therefore,
memdesc->gpuaddr should not be zeroed for these memdescs.
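A sketch of the unmap-path guard this implies; the flag name follows the
driver's naming convention, but the value and the reduced struct are
assumptions:

  #define KGSL_MEMFLAGS_GLOBAL 0x00000002    /* assumed flag value */

  struct kgsl_memdesc {                      /* reduced stand-in */
          unsigned int priv;
          unsigned int gpuaddr;
  };

  static void kgsl_memdesc_unmapped(struct kgsl_memdesc *memdesc)
  {
          /* Global mappings keep their GPU address: it must stay
           * identical in all pagetables and outlive any one of them. */
          if (memdesc->priv & KGSL_MEMFLAGS_GLOBAL)
                  return;
          memdesc->gpuaddr = 0;
  }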
Given a pagetable base and a GPU address, find the struct kgsl_mem_entry
that matches the object. Move this functionality out from inside another
function and promote it to top level so it can be used by upcoming
functionality.
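A sketch of the promoted helper, written against the kgsl driver's existing
globals and helpers (kgsl_driver, kgsl_mmu_pt_equal, and
kgsl_sharedmem_find_region follow kgsl naming; the exact code may differ):

  struct kgsl_mem_entry *
  kgsl_get_mem_entry(unsigned int ptbase, unsigned int gpuaddr,
                     unsigned int size)
  {
          struct kgsl_process_private *priv;
          struct kgsl_mem_entry *entry = NULL;

          mutex_lock(&kgsl_driver.process_mutex);
          list_for_each_entry(priv, &kgsl_driver.process_list, list) {
                  if (!kgsl_mmu_pt_equal(priv->pagetable, ptbase))
                          continue;
                  /* Found the owning process; search its entries for
                   * a region containing [gpuaddr, gpuaddr + size). */
                  spin_lock(&priv->mem_lock);
                  entry = kgsl_sharedmem_find_region(priv, gpuaddr, size);
                  spin_unlock(&priv->mem_lock);
                  break;
          }
          mutex_unlock(&kgsl_driver.process_mutex);

          return entry;
  }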
Previously, memory objects assumed that they remained attached to a
process until they were destroyed. In the past this was mostly true,
but it worked by luck: a process could technically map the memory and
then close the file descriptor, which would eventually explode. Now we
do the process-related cleanup (MMU unmap, statistics fixup) when the
object is released from the process, so the process can go away without
affecting the other holders of the mem object refcount.
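A sketch of the detach step this describes; the function and field names
follow kgsl conventions but are illustrative, and kgsl_mem_entry_put()
stands in for the driver's refcount drop:

  /* Called when the entry leaves the process: undo everything that
   * is tied to the process, then drop the process's reference. */
  static void kgsl_mem_entry_detach_process(struct kgsl_mem_entry *entry)
  {
          if (entry == NULL)
                  return;

          kgsl_mmu_unmap(entry->memdesc.pagetable, &entry->memdesc);
          entry->priv->stats[entry->memtype].cur -= entry->memdesc.size;
          entry->priv = NULL;

          /* Other refcount holders can keep using the object after
           * the process is gone; the final put frees it. */
          kgsl_mem_entry_put(entry);
  }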