platform_bionic/libc/arch-arm/krait/bionic
Christopher Ferris 7c860db074 Optimize __memset_chk, __memcpy_chk.
This change adds assembler versions of __memcpy_chk/__memset_chk that
are implemented directly in the memcpy/memset assembler code. This
avoids an extra call to memcpy/memset: the chk code simply falls
through into the body of the real implementation.
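
At the C level, the check that the assembler fall-through implements is
roughly the following. This is only an illustrative sketch, not the
bionic code: memcpy_chk_sketch and the abort() stand-in for the fortify
failure path are hypothetical names.

    #include <stdlib.h>
    #include <string.h>

    /* Sketch of the size check that sits in front of the copy.  In the
     * assembler version this comparison is placed directly before the
     * memcpy body, so a passing check falls straight through instead of
     * making a second call. */
    void *memcpy_chk_sketch(void *dst, const void *src,
                            size_t copy_amount, size_t dest_len) {
        if (copy_amount > dest_len) {
            /* bionic reports a fortify failure here; abort() stands in. */
            abort();
        }
        return memcpy(dst, src, copy_amount);
    }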

Testing:

- Ran the libc_test for __memcpy_chk/__memset_chk on all Nexus devices.
- Wrote a small test executable with three calls to __memcpy_chk and
  three calls to __memset_chk: the first call uses a dest_len of
  length + 1, the second uses length, and the third uses length - 1
  (see the sketch after this list). Verified that the first two calls
  pass and the third fails, and examined the logcat output on all Nexus
  devices to verify that the fortify error message was logged properly.
- Benchmarked the new __memcpy_chk and __memset_chk on all systems. For
  __memcpy_chk with large copies, the savings are relatively small (about 1%).
  For small copies, the savings are large on cortex-a15/krait devices
  (between 5% and 30%); on cortex-a9 the speed-up is present but smaller
  (about 3% to 5%).
  For __memset_chk with large copies, the savings are also small (about 1%),
  but all processors show larger speed-ups on small copies (about 30% to
  100%).
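
The test executable described above is not part of this commit; a
minimal sketch of what such a test might look like follows. It assumes
the __builtin___memcpy_chk/__builtin___memset_chk compiler builtins are
used to reach the runtime chk entry points; the buffer sizes and output
strings are illustrative only.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical test sketch: each group of calls passes a different
     * dest_len (length + 1, length, length - 1) as the known destination
     * size.  The first two groups should succeed; the last should hit the
     * fortify check, log an error, and abort. */
    int main(void) {
        char src[64];
        char dst[64];
        memset(src, 'x', sizeof(src));

        /* volatile keeps the sizes from being folded at compile time, so
         * the runtime __memcpy_chk/__memset_chk entry points are used. */
        volatile size_t length = sizeof(dst);

        __builtin___memcpy_chk(dst, src, length, length + 1);  /* passes */
        __builtin___memcpy_chk(dst, src, length, length);      /* passes */
        __builtin___memset_chk(dst, 0, length, length + 1);    /* passes */
        __builtin___memset_chk(dst, 0, length, length);        /* passes */
        printf("dest_len >= length calls passed\n");

        /* dest_len smaller than the copy size: the fortify error message
         * should appear in logcat before the process aborts. */
        __builtin___memcpy_chk(dst, src, length, length - 1);
        __builtin___memset_chk(dst, 0, length, length - 1);
        return 0;
    }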

Bug: 9293744

Change-Id: I8926d59fe2673e36e8a27629e02a7b7059ebbc98
2013-08-06 15:38:29 -07:00
memcpy.S Optimize __memset_chk, __memcpy_chk. 2013-08-06 15:38:29 -07:00
memset.S Optimize __memset_chk, __memcpy_chk. 2013-08-06 15:38:29 -07:00
strcmp.S Create arch specific versions of strcmp. 2013-03-12 10:25:11 -07:00