The futex_wait_cancelable_wait() method

There is hardly anything written about this in Chinese. Many sites are blocked on the company network, and searching with Google on your phone can even earn you a talking-to from the boss, so I figured I'd look it up at home, except overtime is mandatory and I get off late. They want the horse to run but won't feed it.
The source defines this part as follows (a small usage sketch follows the definitions):

+#define FUTEX_PRIVATE_FLAG	128
+#define FUTEX_CMD_MASK		~FUTEX_PRIVATE_FLAG
+
+#define FUTEX_WAIT_PRIVATE	(FUTEX_WAIT | FUTEX_PRIVATE_FLAG)
+#define FUTEX_WAKE_PRIVATE	(FUTEX_WAKE | FUTEX_PRIVATE_FLAG)
+#define FUTEX_REQUEUE_PRIVATE	(FUTEX_REQUEUE | FUTEX_PRIVATE_FLAG)
+#define FUTEX_CMP_REQUEUE_PRIVATE (FUTEX_CMP_REQUEUE | FUTEX_PRIVATE_FLAG)
+#define FUTEX_WAKE_OP_PRIVATE	(FUTEX_WAKE_OP | FUTEX_PRIVATE_FLAG)
+#define FUTEX_LOCK_PI_PRIVATE	(FUTEX_LOCK_PI | FUTEX_PRIVATE_FLAG)
+#define FUTEX_UNLOCK_PI_PRIVATE	(FUTEX_UNLOCK_PI | FUTEX_PRIVATE_FLAG)
+#define FUTEX_TRYLOCK_PI_PRIVATE (FUTEX_TRYLOCK_PI | FUTEX_PRIVATE_FLAG)
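
A quick userspace sketch of my own (not part of the patch) showing how these macros compose: a _PRIVATE subcommand is simply the base command OR'ed with FUTEX_PRIVATE_FLAG, and FUTEX_CMD_MASK strips the flag back off so the kernel can switch on the base command. It assumes a <linux/futex.h> recent enough to carry these definitions.

#include <linux/futex.h>
#include <stdio.h>

int main(void)
{
	int op  = FUTEX_WAIT | FUTEX_PRIVATE_FLAG;	/* same value as FUTEX_WAIT_PRIVATE (128) */
	int cmd = op & FUTEX_CMD_MASK;			/* flag masked off again: FUTEX_WAIT (0) */

	printf("op=%d cmd=%d private=%d\n",
	       op, cmd, !!(op & FUTEX_PRIVATE_FLAG));	/* prints: op=128 cmd=0 private=1 */
	return 0;
}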

For what FUTEX_PRIVATE_FLAG actually does, check man 2 futex and the original patch announcement:
https://lwn.net/Articles/229668/
My work machine cannot reach that site, so I am pasting the content below (hopefully that does not count as infringement):
Hi all

I’m pleased to present this patch which improves linux futexes performance and
scalability, merely avoiding taking mmap_sem rwlock.

Ulrich agreed with the API and said glibc work could start as soon
as he gets a Fedora kernel with it ?

Andrew, could we get this in mm as well ? This version is against 2.6.21-rc5-mm4
(so supports 64bit futexes)

In this third version I dropped the NUMA optims and process private hash table,
to let new API come in and be tested.

Thank you

[PATCH] FUTEX : new PRIVATE futexes

Analysis of current linux futex code :

A central hash table futex_queues[] holds all contexts (futex_q) of waiting
threads.
Each futex_wait()/futex_wake() has to obtain a spinlock on a hash slot to
perform lookups or insert/deletion of a futex_q.

When a futex_wait() is done, calling thread has to :

  1. Obtain a read lock on mmap_sem to be able to validate the user pointer
     (calling find_vma()). This validation tells us if the futex uses
     an inode based store (mapped file), or mm based store (anonymous mem)
  2. compute a hash key
  3. Atomic increment of reference counter on an inode or a mm_struct
  4. lock part of futex_queues[] hash table
  5. perform the test on value of futex
     (rollback if value != expected_value, returns EWOULDBLOCK;
     various loops if test triggers mm faults; see the userspace sketch
     after this list)
  6. queue the context into hash table, release the lock got in 4)
  7. release the read_lock on mmap_sem
  8. Eventually unqueue the context (but rarely, as this part
     may be done by the futex_wake())
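
The user-visible contract behind step 5 can be seen from a trivial userspace call; this is a sketch of mine, not part of the patch: when *uaddr does not match the expected value, futex(FUTEX_WAIT) does not sleep and fails straight away with EWOULDBLOCK (EAGAIN).

#include <errno.h>
#include <linux/futex.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	int futex_word = 0;

	/* expected value (1) != current value (0): the kernel performs the
	 * value test of step 5, never queues us, and returns immediately */
	long ret = syscall(SYS_futex, &futex_word, FUTEX_WAIT, 1, NULL, NULL, 0);

	printf("ret=%ld errno=%s\n", ret, strerror(errno));	/* EAGAIN / EWOULDBLOCK */
	return 0;
}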

Futexes were designed to improve scalability but current implementation
has various problems :

  • Central hashtable :
    This means scalability problems if many processes/threads want to use
    futexes at the same time.
    This means NUMA unbalance because this hashtable is located on one node.

  • Using mmap_sem on every futex() syscall :

 Even if mmap_sem is a rw_semaphore, up_read()/down_read() are doing atomic
 ops on mmap_sem, dirtying cache line :
    – lot of cache line ping pongs on SMP configurations.

 mmap_sem is also extensively used by mm code (page faults, mmap()/munmap())
 Highly threaded processes might suffer from mmap_sem contention.

 mmap_sem is also used by oprofile code. Enabling oprofile hurts threaded
programs because of contention on the mmap_sem cache line.

  • Using an atomic_inc()/atomic_dec() on inode ref counter or mm ref counter:
    It’s also a cache line ping pong on SMP. It also increases mmap_sem hold time
    because of cache misses.

Most of these scalability problems come from the fact that futexes are in
one global namespace. As we use a central hash table, we must make sure
they are all using the same reference (given by the mm subsystem).
We chose to force all futexes to be ‘shared’. This has a cost.

But fact is POSIX defined PRIVATE and SHARED, allowing clear separation, and
optimal performance if carefully implemented. Time has come for linux to have
better threading performance.
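
For reference, the POSIX-level switch the mail is talking about is the process-shared attribute: pthread objects default to PTHREAD_PROCESS_PRIVATE, and only objects explicitly marked process-shared need the old 'shared' futex behaviour. A minimal sketch of mine (not glibc code; the function name is made up):

#include <pthread.h>

int make_mutexes(pthread_mutex_t *private_m, pthread_mutex_t *shared_m)
{
	pthread_mutexattr_t attr;

	/* Default attributes: PTHREAD_PROCESS_PRIVATE, i.e. the case a
	 * patched glibc could serve with the new FUTEX_*_PRIVATE commands. */
	pthread_mutex_init(private_m, NULL);

	/* Explicitly process-shared (e.g. the mutex lives in shared memory):
	 * keeps using the old shared futex subcommands. */
	pthread_mutexattr_init(&attr);
	pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
	pthread_mutex_init(shared_m, &attr);
	pthread_mutexattr_destroy(&attr);

	return 0;
}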

The goal is to permit new futex commands to avoid :
 – Taking the mmap_sem semaphore, conflicting with other subsystems.
 – Modifying a ref_count on mm or an inode, still conflicting with mm or fs.

This is possible because, for one process using PTHREAD_PROCESS_PRIVATE
futexes, we only need to distinguish futexes by their virtual address, no
matter the underlying mm storage is.

If glibc wants to exploit this new infrastructure, it should use new
_PRIVATE futex subcommands for PTHREAD_PROCESS_PRIVATE futexes. And
be prepared to fallback on old subcommands for old kernels. Using one
global variable with the FUTEX_PRIVATE_FLAG or 0 value should be OK.
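
A sketch of the fallback scheme described above; this is my illustration, not glibc's actual implementation, and the helper name and global are made up: try the _PRIVATE subcommand, and if an older kernel rejects it with ENOSYS, clear a single global flag so every later call uses the old subcommands.

#include <errno.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* FUTEX_PRIVATE_FLAG on new kernels, cleared to 0 after the first ENOSYS */
static int futex_private_flag = FUTEX_PRIVATE_FLAG;

static long futex_wait_compat(int *uaddr, int val)
{
	long ret = syscall(SYS_futex, uaddr, FUTEX_WAIT | futex_private_flag,
			   val, NULL, NULL, 0);

	if (ret == -1 && errno == ENOSYS && futex_private_flag) {
		/* old kernel: remember that, then retry with the old command */
		futex_private_flag = 0;
		ret = syscall(SYS_futex, uaddr, FUTEX_WAIT, val, NULL, NULL, 0);
	}
	return ret;
}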

PTHREAD_PROCESS_SHARED futexes should still use the old subcommands.

Compatibility with old applications is preserved, they still hit the
scalability problems, but new applications can fly ?

Note : the same SHARED futex (mapped on a file) can be used by old binaries
and new binaries, because both binaries will use the old subcommands.

Note : Vast majority of futexes should be using PROCESS_PRIVATE semantic,
as this is the default semantic. Almost all applications should benefit
from these changes (new kernel and updated libc)

Some bench results on a Pentium M 1.6 GHz (SMP kernel on a UP machine)

/* calling futex_wait(addr, value) with value != *addr */
434 cycles per futex(FUTEX_WAIT) call (mixing 2 futexes)
427 cycles per futex(FUTEX_WAIT) call (using one futex)
345 cycles per futex(FUTEX_WAIT_PRIVATE) call (mixing 2 futexes)
345 cycles per futex(FUTEX_WAIT_PRIVATE) call (using one futex)
For reference :
187 cycles per getppid() call
188 cycles per umask() call
183 cycles per ni_syscall() call
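
For context, per-call cycle counts like these are usually gathered by issuing the call in a tight loop with an expected value that never matches, so the syscall returns immediately; a rough x86-only harness of my own (not the author's benchmark) could look like this:

#include <linux/futex.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;
	__asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
	return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
	int futex_word = 0;
	const long iters = 100000;
	uint64_t start = rdtsc();

	for (long i = 0; i < iters; i++)
		/* expected value 1 never matches *uaddr (0): no sleeping,
		 * only the syscall path being measured */
		syscall(SYS_futex, &futex_word, FUTEX_WAIT_PRIVATE, 1,
			NULL, NULL, 0);

	printf("%llu cycles per futex(FUTEX_WAIT_PRIVATE) call\n",
	       (unsigned long long)((rdtsc() - start) / iters));
	return 0;
}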

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
---
 include/linux/futex.h |   45 +++++-
 kernel/futex.c        |  294 +++++++++++++++++++++++++---------------
 2 files changed, 230 insertions(+), 109 deletions(-)

--- linux-2.6.21-rc5-mm4/include/linux/futex.h
+++ linux-2.6.21-rc5-mm4-ed/include/linux/futex.h
@@ -19,6 +19,18 @@ union ktime;
 #define FUTEX_TRYLOCK_PI 8
 #define FUTEX_CMP_REQUEUE_PI 9
 
+#define FUTEX_PRIVATE_FLAG	128
+#define FUTEX_CMD_MASK		~FUTEX_PRIVATE_FLAG
+
+#define FUTEX_WAIT_PRIVATE	(FUTEX_WAIT | FUTEX_PRIVATE_FLAG)
+#define FUTEX_WAKE_PRIVATE	(FUTEX_WAKE | FUTEX_PRIVATE_FLAG)
+#define FUTEX_REQUEUE_PRIVATE	(FUTEX_REQUEUE | FUTEX_PRIVATE_FLAG)
+#define FUTEX_CMP_REQUEUE_PRIVATE (FUTEX_CMP_REQUEUE | FUTEX_PRIVATE_FLAG)
+#define FUTEX_WAKE_OP_PRIVATE	(FUTEX_WAKE_OP | FUTEX_PRIVATE_FLAG)
+#define FUTEX_LOCK_PI_PRIVATE	(FUTEX_LOCK_PI | FUTEX_PRIVATE_FLAG)
+#define FUTEX_UNLOCK_PI_PRIVATE	(FUTEX_UNLOCK_PI | FUTEX_PRIVATE_FLAG)
+#define FUTEX_TRYLOCK_PI_PRIVATE (FUTEX_TRYLOCK_PI | FUTEX_PRIVATE_FLAG)
+
 /*
  * Support for robust futexes: the kernel cleans up held futexes at
  * thread exit time.
@@ -115,8 +127,18 @@ handle_futex_death(u32 __user *uaddr, st
  * Don't rearrange members without looking at hash_futex().
  *
  * offset is aligned to a multiple of sizeof(u32) (== 4) by definition.
- * We set bit 0 to indicate if it's an inode-based key.
+ * We use the two low order bits of offset to tell what is the kind of key :
+ *  00 : Private process futex (PTHREAD_PROCESS_PRIVATE)
+ *       (no reference on an inode or mm)
+ *  01 : Shared futex (PTHREAD_PROCESS_SHARED)
+ *       mapped on a file (reference on the underlying inode)
+ *  10 : Shared futex (PTHREAD_PROCESS_SHARED)
+ *       (but private mapping on an mm, and reference taken on it)
  */
+
+#define FUT_OFF_INODE    1 /* We set bit 0 if key has a reference on inode */
+#define FUT_OFF_MMSHARED 2 /* We set bit 1 if key has a reference on mm */
+
 union futex_key {
 	unsigned long __user *uaddr;
 	struct {
@@ -135,8 +157,27 @@ union futex_key {
 		int offset;
 	} both;
 };
-int get_futex_key(void __user *uaddr, union futex_key *key);
+
+/**
+ * get_futex_key - Get parameters which are the keys for a futex.
+ * @uaddr: virtual address of the futex
+ * @shared: NULL for a PROCESS_PRIVATE futex,
+ *	&current->mm->mmap_sem for a PROCESS_SHARED futex
+ * @key: address where result is stored.
+ *
+ * Returns an error code or 0
+ */
+int get_futex_key(void __user *uaddr, union futex_key *key,
+		  struct rw_semaphore *shared);
+
+/**
+ * get_futex_key_refs - Take a reference to the resource addressed by a key
+ */
 void get_futex_key_refs(union futex_key *key);
+
+/**
+ * drop_futex_key_refs - Drop a reference to the resource addressed by a key.
+ */
 void drop_futex_key_refs(union futex_key *key);
 
 #ifdef CONFIG_FUTEX
--- linux-2.6.21-rc5-mm4/kernel/futex.c
+++ linux-2.6.21-rc5-mm4-ed/kernel/futex.c
@@ -16,6 +16,9 @@
  *  Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
  *  Copyright (C) 2006 Timesys Corp., Thomas Gleixner <tglx@timesys.com>
  *
+ *  PRIVATE futexes by Eric Dumazet
+ *  Copyright (C) 2007 Eric Dumazet <dada1@cosmosbay.com>
+ *
  *  Thanks to Ben LaHaise for yelling "hashed waitqueues" loudly
  *  enough at me, Linus for the original (flawed) idea, Matthew
  *  Kirkwood for proof-of-concept implementation.
@@ -199,9 +202,12 @@ static inline int match_futex(union fute
  * Returns: 0, or negative error code.
  * The key words are stored in *key on success.
  *
- * Should be called with &current->mm->mmap_sem but NOT any spinlocks.
+ * shared is NULL for PROCESS_PRIVATE futexes
+ * For other futexes, it points to &current->mm->mmap_sem and
+ * caller must have taken the reader lock. but NOT any spinlocks.
  */
-int get_futex_key(void __user *uaddr, union futex_key *key)
+int get_futex_key(void __user *uaddr, union futex_key *key,
+	struct rw_semaphore *shared)
 {
 	unsigned long address = (unsigned long)uaddr;
 	struct mm_struct *mm = current->mm;
@@ -218,6 +224,22 @@ int get_futex_key(void __user *uaddr, un
 	address -= key->both.offset;
 
 	/*
+	 * PROCESS_PRIVATE futexes are fast.
+	 * As the mm cannot disappear under us and the 'key' only needs
+	 * virtual address, we dont even have to find the underlying vma.
+	 * Note : We do have to check 'address' is a valid user address,
+	 * but access_ok() should be faster than find_vma()
+	 * Note : At this point, address points to the start of page,
+	 * not the real futex address, this is ok.
+	 */
+	if (!shared) {
+		if (!access_ok(VERIFY_WRITE, address, sizeof(int)))
+			return -EFAULT;
+		key->private.mm = mm;
+		key->private.address = address;
+		return 0;
+	}
+	/*
 	 * The futex is hashed differently depending on whether
 	 * it's in a shared or private mapping. So check vma first.
 	 */
@@ -244,6 +266,7 @@ int get_futex_key(void __user *uaddr, un
 	 * mappings of _writable_ handles.
 	 */
 	if (likely(!(vma->vm_flags & VM_MAYSHARE))) {
+		key->both.offset += FUT_OFF_MMSHARED; /* reference taken on mm */
 		key->private.mm = mm;
 		key->private.address = address;
 		return 0;
@@ -253,7 +276,7 @@ int get_futex_key(void __user *uaddr, un
 	 * Linear file mappings are also simple.
 	 */
 	key->shared.inode = vma->vm_file->f_path.dentry->d_inode;
-	key->both.offset++; /* Bit 0 of offset indicates inode-based key. */
+	key->both.offset += FUT_OFF_INODE; /* inode-based key. */
 	if (likely(!(vma->vm_flags & VM_NONLINEAR))) {
 		key->shared.pgoff = (((address - vma->vm_start) >> PAGE_SHIFT)
 				     + vma->vm_pgoff);
@@ -281,17 +304,19 @@ EXPORT_SYMBOL_GPL(get_futex_key);
  * Take a reference to the resource addressed by a key.
  * Can be called while holding spinlocks.
  *
- * NOTE: mmap_sem MUST be held between get_futex_key() and calling this
- * function, if it is called at all.  mmap_sem keeps key->shared.inode valid.
  */
 inline void get_futex_key_refs(union futex_key *key)
 {
-	if (key->both.ptr != 0) {
-		if (key->both.offset & 1)
+	if (key->both.ptr == 0)
+		return;
+	switch (key->both.offset & (FUT_OFF_INODE|FUT_OFF_MMSHARED)) {
+		case FUT_OFF_INODE:
 			atomic_inc(&key->shared.inode->i_count);
-		else
+			break;
+		case FUT_OFF_MMSHARED:
 			atomic_inc(&key->private.mm->mm_count);
-	}
+			break;
+  	}
 }
 EXPORT_SYMBOL_GPL(get_futex_key_refs);
 
@@ -301,11 +326,15 @@ EXPORT_SYMBOL_GPL(get_futex_key_refs);
  */
 void drop_futex_key_refs(union futex_key *key)
 {
-	if (key->both.ptr != 0) {
-		if (key->both.offset & 1)
+	if (key->both.ptr == 0)
+		return;
+	switch (key->both.offset & (FUT_OFF_INODE|FUT_OFF_MMSHARED)) {
+		case FUT_OFF_INODE:
 			iput(key->shared.inode);
-		else
+			break;
+		case FUT_OFF_MMSHARED:
 			mmdrop(key->private.mm);
+			break;
 	}
 }
 EXPORT_SYMBOL_GPL(drop_futex_key_refs);
@@ -339,28 +368,40 @@ get_futex_value_locked(unsigned long *de
 }
 
 /*
- * Fault handling. Called with current->mm->mmap_sem held.
+ * Fault handling.
+ * if shared is non NULL, current->mm->mmap_sem is already held
  */
-static int futex_handle_fault(unsigned long address, int attempt)
+static int futex_handle_fault(unsigned long address, int attempt,
+	struct rw_semaphore *shared)
 {
 	struct vm_area_struct * vma;
 	struct mm_struct *mm = current->mm;
+	int ret = 0;
 
-	if (attempt > 2 || !(vma = find_vma(mm, address)) ||
-	    vma->vm_start > address || !(vma->vm_flags & VM_WRITE))
+	if (attempt > 2)
 		return -EFAULT;
 
-	switch (handle_mm_fault(mm, vma, address, 1)) {
-	case VM_FAULT_MINOR:
-		current->min_flt++;
-		break;
-	case VM_FAULT_MAJOR:
-		current->maj_flt++;
-		break;
-	default:
-		return -EFAULT;
-	}
-	return 0;
+	if (!shared)
+		down_read(&mm->mmap_sem);
+
+	if (!(vma = find_vma(mm, address)) ||
+	    vma->vm_start > address || !(vma->vm_flags & VM_WRITE))
+		ret = -EFAULT;
+
+	else
+		switch (handle_mm_fault(mm, vma, address, 1)) {
+		case VM_FAULT_MINOR:
+			current->min_flt++;
+			break;
+		case VM_FAULT_MAJOR:
+			current->maj_flt++;
+			break;
+		default:
+			ret = -EFAULT;
+		}
+	if (!shared)
+		up_read(&mm->mmap_sem);
+	return ret;
 }
 
 /*
@@ -705,7 +746,8 @@ double_lock_hb(struct futex_hash_bucket
  * Wake up all waiters hashed on the physical page that is mapped
  * to this virtual address:
  */
-static int futex_wake(unsigned long __user *uaddr, int nr_wake)
+static int futex_wake(unsigned long __user *uaddr, int nr_wake,
+	struct rw_semaphore *shared)
 {
 	struct futex_hash_bucket *hb;
 	struct futex_q *this, *next;
@@ -713,9 +755,10 @@ static int futex_wake(unsigned long __us
 	union futex_key key;
 	int ret;
 
-	down_read(&current->mm->mmap_sem);
+	if (shared)
+		down_read(shared);
 
-	ret = get_futex_key(uaddr, &key);
+	ret = get_futex_key(uaddr, &key, shared);
 	if (unlikely(ret != 0))
 		goto out;
 
@@ -737,7 +780,8 @@ static int futex_wake(unsigned long __us
 
 	spin_unlock(&hb->lock);
 out:
-	up_read(&current->mm->mmap_sem);
+	if (shared)
+		up_read(shared);
 	return ret;
 }
 
@@ -807,7 +851,8 @@ retry:
  */
 static int
 futex_requeue_pi(unsigned long __user *uaddr1, unsigned long __user *uaddr2,
-		 int nr_wake, int nr_requeue, unsigned long *cmpval, int futex64)
+		 int nr_wake, int nr_requeue, unsigned long *cmpval, int futex64,
+		 struct rw_semaphore *shared)
 {
 	union futex_key key1, key2;
 	struct futex_hash_bucket *hb1, *hb2;
@@ -825,12 +870,13 @@ retry:
 	/*
 	 * First take all the futex related locks:
 	 */
-	down_read(&current->mm->mmap_sem);
+	if (shared)
+		down_read(shared);
 
-	ret = get_futex_key(uaddr1, &key1);
+	ret = get_futex_key(uaddr1, &key1, shared);
 	if (unlikely(ret != 0))
 		goto out;
-	ret = get_futex_key(uaddr2, &key2);
+	ret = get_futex_key(uaddr2, &key2, shared);
 	if (unlikely(ret != 0))
 		goto out;
 
@@ -998,7 +1044,8 @@ out:
  */
 static int
 futex_wake_op(unsigned long __user *uaddr1, unsigned long __user *uaddr2,
-	      int nr_wake, int nr_wake2, int op, int futex64)
+	      int nr_wake, int nr_wake2, int op, int futex64,
+		struct rw_semaphore *shared)
 {
 	union futex_key key1, key2;
 	struct futex_hash_bucket *hb1, *hb2;
@@ -1007,12 +1054,13 @@ futex_wake_op(unsigned long __user *uadd
 	int ret, op_ret, attempt = 0;
 
 retryfull:
-	down_read(&current->mm->mmap_sem);
+	if (shared)
+		down_read(shared);
 
-	ret = get_futex_key(uaddr1, &key1);
+	ret = get_futex_key(uaddr1, &key1, shared);
 	if (unlikely(ret != 0))
 		goto out;
-	ret = get_futex_key(uaddr2, &key2);
+	ret = get_futex_key(uaddr2, &key2, shared);
 	if (unlikely(ret != 0))
 		goto out;
 
@@ -1055,15 +1103,14 @@ retry:
 		 * futex_atomic_op_inuser needs to both read and write
 		 * *(int __user *)uaddr2, but we can't modify it
 		 * non-atomically.  Therefore, if get_user below is not
-		 * enough, we need to handle the fault ourselves, while
-		 * still holding the mmap_sem.
+		 * enough, we need to handle the fault ourselves. Make 
+		 * sure we hold mmap_sem.
 		 */
 		if (attempt++) {
-			if (futex_handle_fault((unsigned long)uaddr2,
-						attempt)) {
-				ret = -EFAULT;
+			ret = futex_handle_fault((unsigned long)uaddr2,
+						attempt, shared);
+			if (ret)
 				goto out;
-			}
 			goto retry;
 		}
 
@@ -1071,7 +1118,8 @@ retry:
 		 * If we would have faulted, release mmap_sem,
 		 * fault it in and start all over again.
 		 */
-		up_read(&current->mm->mmap_sem);
+		if (shared)
+			up_read(shared);
 
 		ret = futex_get_user(&dummy, uaddr2, futex64);
 		if (ret)
@@ -1108,7 +1156,8 @@ retry:
 	if (hb1 != hb2)
 		spin_unlock(&hb2->lock);
 out:
-	up_read(&current->mm->mmap_sem);
+	if (shared)
+		up_read(shared);
 	return ret;
 }
 
@@ -1118,7 +1167,8 @@ out:
  */
 static int
 futex_requeue(unsigned long __user *uaddr1, unsigned long __user *uaddr2,
-	      int nr_wake, int nr_requeue, unsigned long *cmpval, int futex64)
+	      int nr_wake, int nr_requeue, unsigned long *cmpval, int futex64,
+		struct rw_semaphore *shared)
 {
 	union futex_key key1, key2;
 	struct futex_hash_bucket *hb1, *hb2;
@@ -1127,12 +1177,13 @@ futex_requeue(unsigned long __user *uadd
 	int ret, drop_count = 0;
 
  retry:
-	down_read(&current->mm->mmap_sem);
+	if (shared)
+		down_read(shared);
 
-	ret = get_futex_key(uaddr1, &key1);
+	ret = get_futex_key(uaddr1, &key1, shared);
 	if (unlikely(ret != 0))
 		goto out;
-	ret = get_futex_key(uaddr2, &key2);
+	ret = get_futex_key(uaddr2, &key2, shared);
 	if (unlikely(ret != 0))
 		goto out;
 
@@ -1155,7 +1206,8 @@ futex_requeue(unsigned long __user *uadd
 			 * If we would have faulted, release mmap_sem, fault
 			 * it in and start all over again.
 			 */
-			up_read(&current->mm->mmap_sem);
+			if (shared)
+				up_read(shared);
 
 			ret = futex_get_user(&curval, uaddr1, futex64);
 
@@ -1208,7 +1260,8 @@ out_unlock:
 		drop_futex_key_refs(&key1);
 
 out:
-	up_read(&current->mm->mmap_sem);
+	if (shared)
+		up_read(shared);
 	return ret;
 }
 
@@ -1392,7 +1445,8 @@ static int fixup_pi_state_owner(unsigned
 
 static long futex_wait_restart(struct restart_block *restart);
 static int futex_wait(unsigned long __user *uaddr, unsigned long val,
-		      ktime_t *abs_time, int futex64)
+		      ktime_t *abs_time, int futex64,
+		      struct rw_semaphore *shared)
 {
 	struct task_struct *curr = current;
 	DECLARE_WAITQUEUE(wait, curr);
@@ -1405,9 +1459,10 @@ static int futex_wait(unsigned long __us
 
 	q.pi_state = NULL;
  retry:
-	down_read(&curr->mm->mmap_sem);
+	if (shared)
+		down_read(shared);
 
-	ret = get_futex_key(uaddr, &q.key);
+	ret = get_futex_key(uaddr, &q.key, shared);
 	if (unlikely(ret != 0))
 		goto out_release_sem;
 
@@ -1430,8 +1485,8 @@ static int futex_wait(unsigned long __us
 	 * a wakeup when *uaddr != val on entry to the syscall.  This is
 	 * rare, but normal.
 	 *
-	 * We hold the mmap semaphore, so the mapping cannot have changed
-	 * since we looked it up in get_futex_key.
+	 * for shared futexes, we hold the mmap semaphore, so the mapping
+	 * cannot have changed since we looked it up in get_futex_key.
 	 */
 	ret = get_futex_value_locked(&uval, uaddr, futex64);
 
@@ -1442,7 +1497,8 @@ static int futex_wait(unsigned long __us
 		 * If we would have faulted, release mmap_sem, fault it in and
 		 * start all over again.
 		 */
-		up_read(&curr->mm->mmap_sem);
+		if (shared)
+			up_read(shared);
 		ret = futex_get_user(&uval, uaddr, futex64);
 
 		if (!ret)
@@ -1468,7 +1524,8 @@ static int futex_wait(unsigned long __us
 	 * Now the futex is queued and we have checked the data, we
 	 * don't want to hold mmap_sem while we sleep.
 	 */
-	up_read(&curr->mm->mmap_sem);
+	if (shared)
+		up_read(shared);
 
 	/*
 	 * There might have been scheduling since the queue_me(), as we
@@ -1568,7 +1625,8 @@ static int futex_wait(unsigned long __us
 			}
 			/* Unqueue and drop the lock */
 			unqueue_me_pi(&q);
-			up_read(&curr->mm->mmap_sem);
+			if (shared)
+				up_read(shared);
 		}
 
 		debug_rt_mutex_free_waiter(&q.waiter);
@@ -1598,6 +1656,8 @@ static int futex_wait(unsigned long __us
 		restart->arg1 = val;
 		restart->arg2 = (unsigned long)abs_time;
 		restart->arg3 = (unsigned long)futex64;
+		if (shared)
+			restart->arg3 |= 2;
 		return -ERESTART_RESTARTBLOCK;
 	}
 
@@ -1605,7 +1665,8 @@ static int futex_wait(unsigned long __us
 	queue_unlock(&q, hb);
 
  out_release_sem:
-	up_read(&curr->mm->mmap_sem);
+	if (shared)
+		up_read(shared);
 	return ret;
 }
 
@@ -1615,10 +1676,11 @@ static long futex_wait_restart(struct re
 	unsigned long __user *uaddr = (unsigned long __user *)restart->arg0;
 	unsigned long val = restart->arg1;
 	ktime_t *abs_time = (ktime_t *)restart->arg2;
-	int futex64 = (int)restart->arg3;
+	int futex64 = (int)restart->arg3 & 1 ;
+	struct rw_semaphore *shared = (restart->arg3 & 2) ? &current->mm->mmap_sem : NULL;
 
 	restart->fn = do_no_restart_syscall;
-	return (long)futex_wait(uaddr, val, abs_time, futex64);
+	return (long)futex_wait(uaddr, val, abs_time, futex64, shared);
 }
 
 
@@ -1674,7 +1736,7 @@ static void set_pi_futex_owner(struct fu
  * races the kernel might see a 0 value of the futex too.)
  */
 static int futex_lock_pi(unsigned long __user *uaddr, int detect, ktime_t *time,
-			 int trylock, int futex64)
+			 int trylock, int futex64, struct rw_semaphore *shared)
 {
 	struct hrtimer_sleeper timeout, *to = NULL;
 	struct task_struct *curr = current;
@@ -1695,9 +1757,10 @@ static int futex_lock_pi(unsigned long _
 
 	q.pi_state = NULL;
  retry:
-	down_read(&curr->mm->mmap_sem);
+	if (shared)
+		down_read(shared);
 
-	ret = get_futex_key(uaddr, &q.key);
+	ret = get_futex_key(uaddr, &q.key, shared);
 	if (unlikely(ret != 0))
 		goto out_release_sem;
 
@@ -1818,7 +1881,8 @@ static int futex_lock_pi(unsigned long _
 	 * Now the futex is queued and we have checked the data, we
 	 * don't want to hold mmap_sem while we sleep.
 	 */
-	up_read(&curr->mm->mmap_sem);
+	if (shared)
+		up_read(shared);
 
 	WARN_ON(!q.pi_state);
 	/*
@@ -1832,7 +1896,8 @@ static int futex_lock_pi(unsigned long _
 		ret = ret ? 0 : -EWOULDBLOCK;
 	}
-	down_read(&curr->mm->mmap_sem);
+	if (shared)
+		down_read(shared);
 	spin_lock(q.lock_ptr);
 	/*
@@ -1854,7 +1919,8 @@ static int futex_lock_pi(unsigned long _
 	}
 	/* Unqueue and drop the lock */
 		unqueue_me_pi(&q);
-		up_read(&curr->mm->mmap_sem);
+		if (shared)
+			up_read(shared);
 	}
 
 	if (!detect && ret == -EDEADLK && 0)
@@ -1866,7 +1932,8 @@ static int futex_lock_pi(unsigned long _
 	queue_unlock(&q, hb);
 
  out_release_sem:
-	up_read(&curr->mm->mmap_sem);
+	if (shared)
+		up_read(shared);
 	return ret;
 
  uaddr_faulted:
@@ -1877,15 +1944,15 @@ static int futex_lock_pi(unsigned long _
 	 * still holding the mmap_sem.
 	 */
 	if (attempt++) {
-		if (futex_handle_fault((unsigned long)uaddr, attempt)) {
-			ret = -EFAULT;
+		ret = futex_handle_fault((unsigned long)uaddr, attempt, shared);
+		if (ret)
 			goto out_unlock_release_sem;
-		}
 		goto retry_locked;
 	}
 
 	queue_unlock(&q, hb);
-	up_read(&curr->mm->mmap_sem);
+	if (shared)
+		up_read(shared);
 
 	ret = futex_get_user(&uval, uaddr, futex64);
 	if (!ret && (uval != -EFAULT))
@@ -1899,7 +1966,8 @@ static int futex_lock_pi(unsigned long _
  * This is the in-kernel slowpath: we look up the PI state (if any),
  * and do the rt-mutex unlock.
  */
-static int futex_unlock_pi(unsigned long __user *uaddr, int futex64)
+static int futex_unlock_pi(unsigned long __user *uaddr, int futex64,
+	struct rw_semaphore *shared)
 {
 	struct futex_hash_bucket *hb;
 	struct futex_q *this, *next;
@@ -1919,9 +1987,10 @@ retry:
 	/*
 	 * First take all the futex related locks:
 	 */
-	down_read(&current->mm->mmap_sem);
+	if (shared)
+		down_read(shared);
 
-	ret = get_futex_key(uaddr, &key);
+	ret = get_futex_key(uaddr, &key, shared);
 	if (unlikely(ret != 0))
 		goto out;
 
@@ -1980,7 +2049,8 @@ retry_locked:
 out_unlock:
 	spin_unlock(&hb->lock);
 out:
-	up_read(&current->mm->mmap_sem);
+	if (shared)
+		up_read(shared);
 
 	return ret;
 
@@ -1992,15 +2062,15 @@ pi_faulted:
 	 * still holding the mmap_sem.
 	 */
 	if (attempt++) {
-		if (futex_handle_fault((unsigned long)uaddr, attempt)) {
-			ret = -EFAULT;
+		ret = futex_handle_fault((unsigned long)uaddr, attempt, shared);
+		if (ret)
 			goto out_unlock;
-		}
 		goto retry_locked;
 	}
 
 	spin_unlock(&hb->lock);
-	up_read(&current->mm->mmap_sem);
+	if (shared)
+		up_read(shared);
 
 	ret = futex_get_user(&uval, uaddr, futex64);
 	if (!ret && (uval != -EFAULT))
@@ -2052,6 +2122,7 @@ static int futex_fd(u32 __user *uaddr, i
 	struct futex_q *q;
 	struct file *filp;
 	int ret, err;
+	struct rw_semaphore *shared;
 	static unsigned long printk_interval;
 
 	if (printk_timed_ratelimit(&printk_interval, 60 * 60 * 1000)) {
@@ -2093,11 +2164,12 @@ static int futex_fd(u32 __user *uaddr, i
 	}
 	q->pi_state = NULL;
 
-	down_read(&current->mm->mmap_sem);
-	err = get_futex_key(uaddr, &q->key);
+	shared = &current->mm->mmap_sem;
+	down_read(shared);
+	err = get_futex_key(uaddr, &q->key, shared);
 
 	if (unlikely(err != 0)) {
-		up_read(&current->mm->mmap_sem);
+		up_read(shared);
 		kfree(q);
 		goto error;
 	}
@@ -2109,7 +2181,7 @@ static int futex_fd(u32 __user *uaddr, i
 	filp->private_data = q;
 
 	queue_me(q, ret, filp);
-	up_read(&current->mm->mmap_sem);
+	up_read(shared);
 
 	/* Now we map fd to filp, so userspace can access it */
 	fd_install(ret, filp);
@@ -2238,7 +2310,8 @@ retry:
 		 */
 		if (!pi) {
 			if (uval & FUTEX_WAITERS)
-				futex_wake((unsigned long __user *)uaddr, 1);
+				futex_wake((unsigned long __user *)uaddr, 1,
+						&curr->mm->mmap_sem);
 		}
 	}
 	return 0;
@@ -2326,13 +2399,18 @@ long do_futex(unsigned long __user *uadd
 	      unsigned long val2, unsigned long val3, int fut64)
 {
 	int ret;
+	int opm = op & FUTEX_CMD_MASK;
+	struct rw_semaphore *shared = NULL;
+
+	if (!(op & FUTEX_PRIVATE_FLAG))
+		shared = &current->mm->mmap_sem;
 
-	switch (op) {
+	switch (opm) {
 	case FUTEX_WAIT:
-		ret = futex_wait(uaddr, val, timeout, fut64);
+		ret = futex_wait(uaddr, val, timeout, fut64, shared);
 		break;
 	case FUTEX_WAKE:
-		ret = futex_wake(uaddr, val);
+		ret = futex_wake(uaddr, val, shared);
 		break;
 	case FUTEX_FD:
 		if (fut64)
@@ -2342,25 +2420,25 @@ long do_futex(unsigned long __user *uadd
 			ret = futex_fd((u32 __user *)uaddr, val);
 		break;
 	case FUTEX_REQUEUE:
-		ret = futex_requeue(uaddr, uaddr2, val, val2, NULL, fut64);
+		ret = futex_requeue(uaddr, uaddr2, val, val2, NULL, fut64, shared);
 		break;
 	case FUTEX_CMP_REQUEUE:
-		ret = futex_requeue(uaddr, uaddr2, val, val2, &val3, fut64);
+		ret = futex_requeue(uaddr, uaddr2, val, val2, &val3, fut64, shared);
 		break;
 	case FUTEX_WAKE_OP:
-		ret = futex_wake_op(uaddr, uaddr2, val, val2, val3, fut64);
+		ret = futex_wake_op(uaddr, uaddr2, val, val2, val3, fut64, shared);
 		break;
 	case FUTEX_LOCK_PI:
-		ret = futex_lock_pi(uaddr, val, timeout, 0, fut64);
+		ret = futex_lock_pi(uaddr, val, timeout, 0, fut64, shared);
 		break;
 	case FUTEX_UNLOCK_PI:
-		ret = futex_unlock_pi(uaddr, fut64);
+		ret = futex_unlock_pi(uaddr, fut64, shared);
 		break;
 	case FUTEX_TRYLOCK_PI:
-		ret = futex_lock_pi(uaddr, 0, timeout, 1, fut64);
+		ret = futex_lock_pi(uaddr, 0, timeout, 1, fut64, shared);
 		break;
 	case FUTEX_CMP_REQUEUE_PI:
-		ret = futex_requeue_pi(uaddr, uaddr2, val, val2, &val3, fut64);
+		ret = futex_requeue_pi(uaddr, uaddr2, val, val2, &val3, fut64, shared);
 		break;
 	default:
 		ret = -ENOSYS;
@@ -2377,23 +2455,24 @@ sys_futex64(u64 __user *uaddr, int op, u
 	struct timespec ts;
 	ktime_t t, *tp = NULL;
 	u64 val2 = 0;
+	int opm = op & FUTEX_CMD_MASK;
 
-	if (utime && (op == FUTEX_WAIT || op == FUTEX_LOCK_PI)) {
+	if (utime && (opm == FUTEX_WAIT || opm == FUTEX_LOCK_PI)) {
 		if (copy_from_user(&ts, utime, sizeof(ts)) != 0)
 			return -EFAULT;
 		if (!timespec_valid(&ts))
 			return -EINVAL;
 
 		t = timespec_to_ktime(ts);
-		if (op == FUTEX_WAIT)
+		if (opm == FUTEX_WAIT)
 			t = ktime_add(ktime_get(), t);
 		tp = &t;
 	}
 	/*
-	 * requeue parameter in 'utime' if op == FUTEX_REQUEUE.
+	 * requeue parameter in 'utime' if opm == FUTEX_REQUEUE.
 	 */
-	if (op == FUTEX_REQUEUE || op == FUTEX_CMP_REQUEUE
-	    || op == FUTEX_CMP_REQUEUE_PI)
+	if (opm == FUTEX_REQUEUE || opm == FUTEX_CMP_REQUEUE
+	    || opm == FUTEX_CMP_REQUEUE_PI)
 		val2 = (unsigned long) utime;
 
 	return do_futex((unsigned long __user*)uaddr, op, val, tp,
@@ -2409,23 +2488,24 @@ asmlinkage long sys_futex(u32 __user *ua
 	struct timespec ts;
 	ktime_t t, *tp = NULL;
 	u32 val2 = 0;
+	int opm = op & FUTEX_CMD_MASK;
 
-	if (utime && (op == FUTEX_WAIT || op == FUTEX_LOCK_PI)) {
+	if (utime && (opm == FUTEX_WAIT || opm == FUTEX_LOCK_PI)) {
 		if (copy_from_user(&ts, utime, sizeof(ts)) != 0)
 			return -EFAULT;
 		if (!timespec_valid(&ts))
 			return -EINVAL;
 
 		t = timespec_to_ktime(ts);
-		if (op == FUTEX_WAIT)
+		if (opm == FUTEX_WAIT)
 			t = ktime_add(ktime_get(), t);
 		tp = &t;
 	}
 	/*
 	 * requeue parameter in 'utime' if op == FUTEX_REQUEUE.
 	 */
-	if (op == FUTEX_REQUEUE || op == FUTEX_CMP_REQUEUE
-	    || op == FUTEX_CMP_REQUEUE_PI)
+	if (opm == FUTEX_REQUEUE || opm == FUTEX_CMP_REQUEUE
+	    || opm == FUTEX_CMP_REQUEUE_PI)
 		val2 = (u32) (unsigned long) utime;
 
 	return do_futex((unsigned long __user*)uaddr, op, val, tp,
