/*
 * include/asm-xtensa/uaccess.h
 *
 * User space memory access functions
 *
 * These routines provide basic accessing functions to the user memory
 * space for the kernel.
 *
 * This file is subject to the terms and conditions of the GNU General Public
 * License.  See the file "COPYING" in the main directory of this archive
 * for more details.
 *
 * Copyright (C) 2001 - 2005 Tensilica Inc.
 */
/*
 * These are the main single-value transfer routines.  They
 * automatically use the right size if we just have the right pointer
 * type.
 *
 * This gets kind of ugly.  We want to return _two_ values in
 * "get_user()" and yet we don't want to do any pointers, because that
 * is too much of a performance impact.  Thus we have a few rather ugly
 * macros here, and hide all the ugliness from the user.
 *
 * Careful to not
 * (a) re-use the arguments for side effects (sizeof is ok)
 * (b) require any knowledge of processes at this stage
 */
#define put_user(x, ptr)	__put_user_check((x), (ptr), sizeof(*(ptr)))
#define get_user(x, ptr)	__get_user_check((x), (ptr), sizeof(*(ptr)))
/*
 * The "__xxx" versions of the user access functions are versions that
 * do not verify the address space; that must have been done previously
 * with a separate "access_ok()" call (this is used when we do multiple
 * accesses to the same area of user memory).
 */
#define __put_user(x, ptr)	__put_user_nocheck((x), (ptr), sizeof(*(ptr)))
#define __get_user(x, ptr)	__get_user_nocheck((x), (ptr), sizeof(*(ptr)))
/*
 * Consider the case where a single user load/store would cause both an
 * unaligned exception and an MMU-related exception (unaligned
 * exceptions happen first):
 *
 *	User code passes a bad variable ptr to a system call.
 *	Kernel tries to access the variable.
 *	Unaligned exception occurs.
 *	Unaligned exception handler tries to make aligned accesses.
 *	Double exception occurs for an MMU-related cause (e.g., page
 *	not mapped).
 *	do_page_fault() thinks the fault address belongs to the kernel,
 *	not the user, and panics.
 *
 * The kernel currently prohibits user unaligned accesses.  We use the
 * __check_align_* macros to check for unaligned addresses before
 * accessing user space so we don't crash the kernel.  Both
 * __put_user_asm and __get_user_asm use these alignment macros, so
 * macro-specific labels such as 0f, 1f, %0, %2, and %3 must stay in
 * sync.
 */
/*
 * We need to return the number of bytes not cleared.  Our memset()
 * returns zero if a problem occurs while accessing user-space memory.
 * In that event, return no memory cleared (i.e., the full requested
 * size).  Otherwise, return zero for success.
 */