We're kicking logcat clients more frequently than expected, so print
this information for debugging purposes.
Bug: 144311420
Test: see these logs
Change-Id: I1570cd4b377a62c863bc26c7b3148e04c2433a9c
logd currently only reports the UID of a log message for 'privileged'
readers (those with a uid or gid of root, system, or log). However,
UIDs are not particularly sensitive. Much more importantly,
non-privileged readers can only see log messages from their own UID,
so this restriction is essentially a no-op, as those readers already
know their own UID.
Test: liblog and logd unit tests
Change-Id: I9da7d15eb840ba3200128391e70d618eec79f988
- Zero initialize log_time instances by default
- Disable implicit conversions by making constructors explicit
- Explicitly initialize to EPOCH in most places
- Change sniffTime() to avoid in-place modification of a log_time
I stopped here, but we could consider following up with a more invasive
change to make log_time instances immutable and perhaps also remove the
default constructor to force explicit initialization to EPOCH.
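A minimal sketch of the constructor pattern, simplified from the real
liblog log_time (not the actual header):

    #include <cstdint>
    #include <ctime>

    struct log_time {
        uint32_t tv_sec = 0;   // zero initialized by default
        uint32_t tv_nsec = 0;

        log_time() = default;
        // explicit: no implicit conversion from a timespec
        explicit log_time(const timespec& ts)
            : tv_sec(static_cast<uint32_t>(ts.tv_sec)),
              tv_nsec(static_cast<uint32_t>(ts.tv_nsec)) {}
    };

    // Explicit EPOCH value for call sites that want to spell it out.
    const log_time EPOCH = log_time();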
Test: atest libbase_test netd_unit_test
Change-Id: I67e716ef74adaaf40ab2c6e2e0dddb8d309bc7ca
LogTimeEntry's lifecycle is spread across various locations. It also
seems incomplete: there is logic that assumes its associated thread
can exit while the underlying LogTimeEntry remains valid, but that
does not appear to be an actually supported situation.
This change simplifies this logic to have only one valid state for a
LogTimeEntry: it must have its thread running and be present in
LastLogTimes. A LogTimeEntry will never be placed into LastLogTimes
unless its thread is running and its thread will remove its associated
LogTimeEntry from LastLogTimes before it has exited.
This admittedly breaks situations where a blocking socket is issued
multiple commands with different pid filters, tail lines, etc.;
however, I'm reasonably sure that these situations were already
broken. A check is added to close the socket in this case.
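A rough sketch of the single valid state, assuming a LastLogTimes list
guarded by a lock; the names follow the description above and are not
the actual logd code:

    #include <list>
    #include <memory>
    #include <mutex>
    #include <thread>

    struct LogTimeEntry {
        void threadRun(std::shared_ptr<LogTimeEntry> self);
        // ... reader state ...
    };

    static std::mutex timesLock;
    static std::list<std::shared_ptr<LogTimeEntry>> LastLogTimes;

    // The only valid state: present in LastLogTimes with its thread running.
    void startReader(std::shared_ptr<LogTimeEntry> entry) {
        std::lock_guard<std::mutex> guard(timesLock);
        LastLogTimes.push_back(entry);
        std::thread(&LogTimeEntry::threadRun, entry, entry).detach();
    }

    void LogTimeEntry::threadRun(std::shared_ptr<LogTimeEntry> self) {
        // ... flush log entries to the reader socket until finished ...
        std::lock_guard<std::mutex> guard(timesLock);
        LastLogTimes.remove(self);   // always removed before the thread exits
    }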
Test: multiple logcat instances work, logd.reader.per's are cleaned up
Change-Id: Ibe8651e7d530c5e9a8d6ce3150cd247982887cbe
Events in the LogBuffer are supposed to be sorted by timestamp, but for a variety
of reasons that doesn't always happen. When a LogReader is reading from LogBuffer,
LogBuffer starts at the newest event and scans backward through the list, looking
for the last event. Previously it would accept a few that were a little bit out of
order, but if it found one that was ancient, it would just bail. This change removes
that check for the ancient messages. They are probably indicative of some other
problem upstream, but since there is no invariant that the list is sorted, this
change simplifies the search algorithm and makes it look only at the previous
300 events.
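A minimal sketch of the bounded backward scan, assuming a std::list of
elements mostly sorted by timestamp (names are illustrative, not the
actual LogBuffer code):

    #include <cstddef>
    #include <iterator>
    #include <list>

    struct Element { unsigned long long realtime; };

    // Walk backward from the newest entry, but never examine more than
    // maxScan older entries while searching for the reader's start point.
    std::list<Element>::iterator findStart(std::list<Element>& elements,
                                           unsigned long long start,
                                           size_t maxScan = 300) {
        auto it = elements.end();
        size_t scanned = 0;
        for (auto rit = elements.rbegin();
             rit != elements.rend() && scanned < maxScan; ++rit, ++scanned) {
            if (rit->realtime <= start) break;   // passed the start point
            it = std::prev(rit.base());          // candidate first entry to send
        }
        return it;
    }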
Bug: 77222120
Test: while true ; do frameworks/base/cmds/statsd/run_tests.sh 2h ; done
Change-Id: I0824ee7590d34056ce27233a87cd7802c28f50e4
(cherry picked from commit 22712428b8)
Discovered while running AddressSanitizer: binary events were fed
into logd that were smaller than the binary event string header. The
fix is to check the buffer sizes before performing the memcmp
operation.
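A hedged sketch of the bounds check; buffer and header names are
illustrative:

    #include <cstring>

    // Only compare the header if the incoming buffer is at least as
    // large as the header being matched against.
    static bool hasHeader(const char* buf, size_t len,
                          const char* header, size_t headerLen) {
        return len >= headerLen && memcmp(buf, header, headerLen) == 0;
    }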
Test: compile
Bug: 74574189
Change-Id: Ic01ef6fb0725258d9f39bbdca582ed648a1adc5d
Fix a buffer overrun when a tag that is too big is sent to logd.
The supplied buffer is exactly the right size for the maximum message
length, but strlen() will be run on it, so we need to ensure a nul
terminator; otherwise any strlen() will run off the end of the buffer.
Also converted LogBuffer::Log() over to use the safer strnlen() in the
case where it is measuring the buffer (and converted it over to using
__android_log_is_loggable_len()).
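A minimal sketch of the safer measurement, assuming a caller-supplied
buffer that may lack a nul terminator (names are illustrative):

    #include <cstring>

    // The tag buffer may be exactly maxLen bytes with no nul terminator;
    // strnlen() stops at maxLen where a bare strlen() would run off the end.
    static size_t tagLength(const char* tag, size_t maxLen) {
        return strnlen(tag, maxLen);
    }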
Signed-off-by: Paul Elliott <paul.elliott@arm.com>
Test: liblog.android_log_buf_print__maxtag
Change-Id: I3cb8b25af55943fb0f4658657560eb2300f52961
We have already searched for the start point; the start filter check
is paranoia that removes out-of-order entries that we are undoubtedly
interested in. Out-of-order entries occur under reader pressure, as
the writer gets pushed back from in-place sorted order and the entry
lands at the end for the reader to pick up. If this occurred during a
batch run or a logger thread wakeup, the entry could be filtered out
and never output to the reader.
We have to treat exact finds for start in the list as terminal when
we search, as they represent restarts; this depends on the fact that
it is impossible to have the exact same time reported in two log
entries or requested from a batched reader. That does break down if a
log entry reports xxxxxx000 nanoseconds; we fix that by making sure we
never log such a case and slip it by a ns.
Found one case where the logcat.tail_time* tests failed, which was
fixed with this adjustment.
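A hedged sketch of the nanosecond adjustment; the log_time fields are
simplified and the helper name is illustrative:

    #include <cstdint>

    struct log_time { uint32_t tv_sec; uint32_t tv_nsec; };

    // Never report a timestamp whose nanoseconds end in 000, so an exact
    // time match requested by a batched reader can only mean a restart.
    static void slipByOneNs(log_time& t) {
        if ((t.tv_nsec % 1000) == 0) ++t.tv_nsec;
    }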
Test: gTest logd-unit-tests, liblog-unit-tests and logcat-unit-tests
Bug: 38046067
Bug: 37791296
Bug: 38341453
Change-Id: I4dd2e2596dd67b8d602160dd79661e01805227a9
Regressed by introducing too much overlap in the results.
This reverts commit 982ad208b5.
Bug: 38341453
Change-Id: I9d630a6b9f3e464f523424b640090f7e268da9bd
We have already searched for the start point; the start filter check
is paranoia that removes out-of-order entries that we are undoubtedly
interested in. Out-of-order entries occur under reader pressure, as
the writer gets pushed back from in-place sorted order and the entry
lands at the end for the reader to pick up. If this occurred during a
batch run or a logger thread wakeup, the entry could be filtered out
and never output to the reader.
Found one case where the logcat.tail_time* tests failed, which was
fixed with this adjustment.
Test: gTest logd-unit-tests, liblog-unit-tests and logcat-unit-tests
Bug: 38046067
Bug: 37791296
Change-Id: Icbde6b33dca7ab98348c3a872793aeef3997d460
While a reader is present, consider a prune a success, and not busy,
if the buffer is pruned down to a logspan of pruneMargin plus one
second of additional margin. If not busy, there is no need to trigger
any mitigations regarding the readers, or to report any errors.
A side effect is that we no longer mitigate the reader when performing
chatty filtration. This is a positive side effect because we were
getting --wrap wakeups that seemed premature.
Add kickMe() and isBusy() methods to ease maintenance and uniformity
of actions.
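An illustrative sketch of the not-busy test, assuming the logspan is
measured in nanoseconds and that pruneMargin is the 3 second tail-read
back-off described later in this log (names are not the actual logd
code):

    #include <cstdint>

    static const uint64_t NS_PER_SEC = 1000000000ULL;
    static const uint64_t pruneMargin = 3 * NS_PER_SEC;  // tail-read back-off

    // With a reader attached, pruning down to pruneMargin plus one extra
    // second of logspan counts as success, so no reader mitigation fires.
    static bool prunedEnough(uint64_t newestNs, uint64_t oldestNs) {
        return (newestNs - oldestNs) <= (pruneMargin + NS_PER_SEC);
    }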
Test: gTest liblog-unit-tests, logd-unit-tests & logcat-unit-tests
Test: manual: 'logcat -b all -c' repeated in a loop, at various logging
load levels, with 'logcat -b all' running simultaneously in another session.
Bug: 38046067
Change-Id: I3d0c8a2d416a25c45504eda3bfe70b6f6e09ab27
Switch to a reader-writer lock for the Element List lock. Also set up
for a reader-writer lock for the Times list, but continue to use a
mutex where rdlock() and wrlock() share the same implementation for
now.
This should improve general reader performance and prevent a single
hung logd.reader.per thread from blocking other reader operations or
exit. For example, a full-length logcat of an empty buffer (eg: the
crash log buffer) will hold a lock while the iterator scans the entire
list.
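A minimal sketch of the interim Times list lock, where rdlock() and
wrlock() share one mutex implementation for now (names are
illustrative):

    #include <pthread.h>

    // Placeholder lock for the Times list: both flavors map onto a plain
    // mutex until it is converted to a real reader-writer lock.
    class LogTimesLock {
        pthread_mutex_t mutex_ = PTHREAD_MUTEX_INITIALIZER;

      public:
        void rdlock() { pthread_mutex_lock(&mutex_); }
        void wrlock() { pthread_mutex_lock(&mutex_); }
        void unlock() { pthread_mutex_unlock(&mutex_); }
    };

With the Element List lock itself a real reader-writer lock, concurrent
readers of the element list no longer serialize on iteration.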
Test: gTest liblog-unit-tests, logd-unit-tests, logcat-unit-tests
Bug: 37378309
Bug: 37483775
Change-Id: If5723ff4a978e17d828a75321e8f0ba91d4a09e0
Replace stats.add(elem) + stats.subtract(elem) with a new, more
efficient method, stats.addTotal(elem).
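A hedged sketch of the intent behind addTotal(): touch only the running
totals instead of a full add() followed by subtract() (member names are
illustrative):

    #include <cstddef>

    struct Element { unsigned short len; };

    struct Stats {
        size_t totalSize = 0;
        size_t totalCount = 0;

        // Equivalent to add(elem) followed by subtract(elem) for the
        // per-key tables, except that only the totals are touched.
        void addTotal(const Element& elem) {
            totalSize += elem.len;
            ++totalCount;
        }
    };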
Test: gTest liblog-unit-test, logd-unit-tests and logcat-unit-tests
Bug: 37254265
Change-Id: I2b3c2ac44209772b38f383ae46fe6c4422b542cf
Add checking for impossible(tm) scenarios within LogBuffer::flushTo:
1) When iterating through the log entries, check if the iterator
returns two identical element references, and break out of the loop.
2) Cap the maximum number of log entries we will skip while holding
the iterator lock at 4194304, and break out of the loop if exceeded.
We print a message to the kernel logs if we hit these cases.
ToDo: Remove this paranoia at some future date.
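An illustrative sketch of the two guards, assuming a std::list iterator
walk inside flushTo; names are simplified and the fprintf() stands in
for the kernel log write:

    #include <cstddef>
    #include <cstdio>
    #include <list>

    struct LogBufferElement {};

    void flushTo(std::list<LogBufferElement*>& logElements) {
        const size_t maxSkip = 4194304;
        size_t examined = 0;
        LogBufferElement* lastElement = nullptr;

        for (LogBufferElement* element : logElements) {
            if (element == lastElement) {   // impossible(tm): same reference twice
                fprintf(stderr, "logd: duplicate element reference\n");
                break;
            }
            if (++examined > maxSkip) {     // cap entries walked under the lock
                fprintf(stderr, "logd: too many entries examined\n");
                break;
            }
            lastElement = element;
            // ... filter and send the element to the reader ...
        }
    }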
Test: gTest liblog-unit-tests logcat-unit-tests and logd-unit-tests
Bug: 37378309
Change-Id: I789594649db14093238828b9f6d1daeca8b780c2
Deal with a regression introduced in commit
5a34d6ea43 (logd: drop mSequence from
LogBufferElement), where log_time was compared against nsec() time,
miscalculating the watermark boundary. When dealing with logcat
-t/-T, or any tail reading, add a margin that backs the prune off by
a period of 3 seconds (pruneMargin).
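A minimal sketch of the corrected comparison, reducing both sides to
nanoseconds before the watermark test; the names are illustrative and
the 3 second figure is the pruneMargin above:

    #include <cstdint>

    static const uint64_t NS_PER_SEC = 1000000000ULL;
    static const uint64_t pruneMargin = 3 * NS_PER_SEC;

    // Tail readers (logcat -t/-T) keep a watermark; prune backs off by
    // pruneMargin so it never chases right up to the reader's position.
    static bool safeToPrune(uint64_t elementNs, uint64_t watermarkNs) {
        return elementNs + pruneMargin < watermarkNs;
    }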
Test: gTest liblog-unit-tests logcat-unit-tests and logd-unit-tests
Bug: 37378309
Change-Id: I72ea858e4e7b5fa91741ea84c40d2e7c3c4aa031
Reduce the period we are willing to look back for out-of-order
entries. Cap the number of iterations we are willing to look back
for out-of-order entries at 300.
Test: gTest liblog-unit-tests, logd-unit-tests and logcat-unit-tests
Bug: 36875387
Bug: 36874561
Bug: 36861142
Change-Id: Icee289dfc0a37ccab9912dc8ab40a10ef3967b7a
Move the lastTid array from a local in LogBuffer::flushTo to
per-reader context in LogTimes::mLastTid, and pass it into
LogBuffer::flushTo.
Replace NULL with nullptr in touched files.
Simplify LogTimeEntry::cleanSkip_Locked initialization of skipAhead
to memset, to match mLastTid memset initialization.
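An illustrative sketch of the per-reader state; LOG_ID_MAX here is a
stand-in for the liblog constant:

    #include <cstring>
    #include <sys/types.h>

    static const size_t LOG_ID_MAX = 8;   // stand-in for the liblog constant

    struct LogTimeEntry {
        // Per-reader, so "same tid as the previous line" tracking survives
        // across successive LogBuffer::flushTo calls for this reader.
        pid_t mLastTid[LOG_ID_MAX];

        LogTimeEntry() { memset(mLastTid, 0, sizeof(mLastTid)); }
    };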
Test: gTest liblog-unit-tests, logd-unit-tests & logcat-unit-tests
Test: adb logcat -b all | grep chatty | grep -v identical
Bug: 36488201
Change-Id: I0c3887f220a57f80c0490be4b182657b9563aa3f
last should start with mLogElements.end() and be updated as
we iterate to find a matching time entry in the list. Since
it is impossible(sic) for a start time newer than anything in
the list to be supplied, the incorrect iterator initialization
should be inconsequential, but if it ever happens this change
will behave correctly and dump nothing.
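A minimal sketch of the corrected initialization, with the container
and element fields simplified (not the actual flushTo code):

    #include <list>

    struct LogBufferElement { unsigned long long realtime; };

    std::list<LogBufferElement>::iterator findLast(
            std::list<LogBufferElement>& mLogElements, unsigned long long start) {
        // Start with end(): if nothing is newer than start, dump nothing.
        auto last = mLogElements.end();
        for (auto it = mLogElements.begin(); it != mLogElements.end(); ++it) {
            if (it->realtime > start) {
                last = it;
                break;
            }
        }
        return last;
    }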
Test: gTest liblog-unit-tests, logd-unit-tests and logcat-unit-tests
Bug: 36536248
Bug: 36608728
Change-Id: I96998c4b713258f29d5db2e24a83ae562ddf3420
A mixture of fixes and cleanup for LogKlog.cpp and friends.
- sscanf calls strlen. Check if the string is missing a nul
terminator; if it is, do not call sscanf.
- replace NULL with nullptr for stronger typechecking.
- pass by reference for simpler code.
- Use ssize_t where possible to check for negative values.
- fix FastCmp to add some validity checking, since ASAN reports that
callers are not making sure its preconditions are met (see the sketch
after this list).
- add fasticmp templates for completeness.
- if the buffer is too small to contain a meaningful time, do not
call down to log_time::strptime(), because it does not limit its
accesses to the buffer boundaries; it instead stops at a terminating
nul or an invalid match.
- move strnstr to LogUtils.h, drop size checking of needle and
clearly report the list of needles used with android::strnstr
- replace 'sizeof(static const char[]) - 1' with strlen.
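A hedged sketch of the FastCmp validity check mentioned above; the
null and zero-length handling is illustrative, not the actual
template:

    #include <cstring>

    // Validate the arguments before comparing; ASAN showed callers
    // handing in buffers that violate the preconditions.
    static inline int fastcmp(const char* l, const char* r, size_t len) {
        if (!l || !r || !len) return -1;   // treat invalid input as "not equal"
        return memcmp(l, r, len);
    }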
Test: gTest liblog-unit-test, logd-unit-tests & logcat-unit-tests
Bug: 30792935
Bug: 36536248
Bug: 35468874
Bug: 34949125
Bug: 34606909
Bug: 36075298
Bug: 36608728
Change-Id: I161bf03ba029050e809b31cceef03f729d318866
Add some deterministic behavior should the user change the hour
backwards when altering the device time: prevent sort-in-place and
cause the logger to land the new entries at the end.
Do not limit how far kernel logs can be sorted.
Test: gTest liblog-unit-tests logd-unit-tests logcat-unit-tests
Bug: 35373582
Change-Id: Ie897c40b97adf1e3996687a0e28c1199c41e0d0c
Regression from commit 8e8e8db549
For liblogcat reader -t or -T <timestamp> tail requests, continue the
search for pertinent out-of-order entries for an additional 30 seconds
back into logging history to find a more inclusive starting point.
For example, if you have an out-of-order landing like
[..., 3, 6, 1, 8, 2, 5] and ask for 3, you used to get only 5; now
you get 3, 6, 8, 5 as 'expected'.
Test: gTest liblog-unit-tests logd-unit-tests logcat-unit-tests
Bug: 35373582
Change-Id: I2a0732933fa371aed383d49c8d48d01f33db2a79
Use getRealTime() instead and leverage private liblog log_time
comparison and math functions. This saves 8 bytes off each
element in the logging database.
Test: gTest liblog-unit-tests logd-unit-tests logcat-unit-tests
Bug: 35373582
Change-Id: Ia55ef8b95cbb2a841ccb1dae9a24f314735b076a
- Improves accuracy of -t/-T '<timestamp>' behavior when out-of-order
arrival of entries messes with mSequence, as the list will now have
monotonic sequence numbers enforced.
- Out-of-order time entries still remain because the reader requires
the ability to receive newly arrived old entries.
- -t/-T '<timestamp>' can still quit the backward search prematurely
because an old entry lands later in the list.
- Adjust the insert-in-place algorithm from two loops (scan for
placement, then limit against the watermark) into one that does all
of that plus iteratively swap-updates the sequence numbers to enforce
monotonicity, as sketched below. A side effect is that the read lock
(which is actually the LogTimes lock) will be held longer while we
search for a placement above the youngest LogTimes watermark. We need
to hold the read (LogTimes) lock because we may be altering sequence
numbers that affect a -t/-T '<timestamp>' search.
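A rough sketch of the single-pass placement with sequence renumbering;
the container and fields are simplified and this is not the actual
logd code:

    #include <iterator>
    #include <list>
    #include <utility>

    struct Element {
        unsigned long long sequence;   // monotonic placement order
        unsigned long long realtime;   // may arrive out of order under pressure
    };

    // One pass: find the placement point, never climbing above the
    // youngest reader watermark, insert, then swap sequence numbers
    // forward so that list order and sequence order agree again.
    void logSorted(std::list<Element>& list, Element elem,
                   unsigned long long watermarkSequence) {
        auto it = list.end();
        while (it != list.begin()) {
            auto prev = std::prev(it);
            if (prev->realtime <= elem.realtime ||
                prev->sequence <= watermarkSequence) {
                break;
            }
            it = prev;
        }
        it = list.insert(it, elem);
        for (auto next = std::next(it); next != list.end(); ++it, ++next) {
            if (it->sequence > next->sequence) {
                std::swap(it->sequence, next->sequence);
            }
        }
    }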
Test: gTest logd-unit-tests liblog-unit-tests logcat-unit-tests
Bug: 35373582
Change-Id: I79a385fc149bac2179128b53d4c8f71e429181ae
Bug: 34949125
Bug: 34606909
Test: Make sure Android boots when built with SANITIZE_TARGET='address'
Change-Id: I9c004e806f2025098aa72228284b05affd2c2802
Switch _all_ files' coding style to match, to ease all future changes.
SideEffects: None
Test: compile
Bug: 35373582
Change-Id: I470cb17f64fa48f14aafc02f574e296bffe3a3f3
Will register a new event tag by name and format, and return an
event-log-tags format response with the newly allocated tag.
If format is not specified, then nothing will be recorded, but
a pre-existing named entry will be listed. If name and format are
not specified, list all dynamic entries. If name=*, list all
event log tag entries.
Stickiness across a logd crash will be managed with the tmpfs file
/dev/event-log-tags, and across a reboot with add_tag entries in
the pmsg last logcat event log. On debug builds we retain a
/data/misc/logd/event-log-tags file that aids stickiness and that
can be picked up by the bugreport.
If we detect truncation damage to /dev/event-log-tags, or to
/data/misc/logd/event-log-tags, rebuild the file with a new first-line
signature incorporating the time, so mmap'd readers of the file can
detect the possible change in shape and order.
Manual testing:
Make sure nc (netcat) is built for the target platform on the host:
$ m nc
Then the following can be used to issue a request on the platform:
$ echo -n 'getEventTag name=<name> format="<format>"\0EXIT\0' |
> nc -U /dev/socket/logd
Test: gTest logd-unit-test --gtest_filter=getEventTag*
Bug: 31456426
Change-Id: I5dacc5f84a24d52dae09cca5ee1a3a9f9207f06d
Report multiple identical chatty messages differently than regular
expire chatty messages. Multiple identical messages will report an
identical count, while the spam filter will report an expire count.
This should reduce the expected flood of people confused by chatty
messages in continuous logcat output.
Test: gTest logd_unit_tests --gtest_filter=logd.multiple*
Change-Id: Iad93d3efc6a3938a4b87ccadddbd86626a015d44
As an extension to the duplicate multiple message filtering,
special-case liblog tagged event messages to be summed. This solves
the inefficient and confusing duplicate message report from the DoS
attack detection, such as:
liblog: 2
liblog: 2
liblog: 2
liblog: 2
liblog: 3
which would result in:
liblog: 2
chatty: ... expire 2 lines
liblog: 2
liblog: 3
And instead sums them and turns them all into:
liblog: 11
liblog messages should never be subject to chatty conversion.
Test: liblog-benchmarks manually check for coalesced liblog messages
and make sure they do not turn into chatty messages.
Instrumented code to capture sum intermediates to be sure.
Bug: 33535908
Change-Id: I3bf03c4bfa36071b578bcd6f62234b409a91184b
Inspection turned up that, for the case of three identical messages,
the result would be a stutter of the first message only. Added
comments to describe the state machine, incoming variables, outgoing
and false-condition outputs, for proper maintenance in the future.
Test: gTest liblog-benchmarks BM_log_maximum* and manually check
for correct midstream chatty messages.
Bug: 33535908
Change-Id: I852260d18a484e6207b80063159f1a74eaa83b55
If a series of messages arrives from a single source with identical
message content payload, then suppress them and generate a chatty
report. The checking is done on a per-log-id basis.
This alters the assumption that chatty messages are always at the
oldest entries; they now show up in the middle too. To address this
change in behavior, we print the first line, then a chatty reference
(which internally takes little space), then the last line in the
series.
This does not conserve processing time in logd, and certainly has no
impact on the long path of formatting and submitting log messages
from the source, but it may contribute to memory space and
signal-to-noise savings under heavy spammy loads.
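A rough sketch of the per-log-id duplicate check; state handling is
simplified, and the real change also re-emits the last line of a run
plus a chatty summary:

    #include <cstring>
    #include <string>

    struct LastMessageState {
        std::string payload;   // previous message content for this log id
        size_t dropped = 0;    // identical lines suppressed so far
    };

    // Returns true when the incoming message matches the previous payload
    // from the same source and should be counted instead of stored.
    static bool isDuplicate(LastMessageState& state, const char* msg, size_t len) {
        if (state.payload.size() == len &&
            memcmp(state.payload.data(), msg, len) == 0) {
            ++state.dropped;
            return true;
        }
        state.payload.assign(msg, len);
        state.dropped = 0;
        return false;
    }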
Test: gTest liblog-unit-tests, logd-unit-tests & logcat-unit-tests
Bug: 33535908
Change-Id: I3160c36d4f4e2f8216f528605a1b3993173f4dec
getTag() becomes invalid when an entry is dropped, because mMsg
disappears to save space; but the per-tag spam filter depends on it
still being valid. Conserve space in LogBufferElement by optimizing
the size of the fields, then add a new mTag field that is set in the
object constructor. Add an isBinary() method.
SideEffects: saves 12 bytes/log message of overhead on 64-bit.
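A hedged sketch of the field packing idea; types, widths and log id
values are illustrative, not the actual LogBufferElement layout:

    #include <cstdint>

    // Stand-ins for the liblog ids (values illustrative).
    enum { LOG_ID_EVENTS = 2, LOG_ID_SECURITY = 5 };

    class LogBufferElement {
        const uint32_t mTag;      // captured in the constructor, survives drops
        const uint16_t mMsgLen;   // narrowed fields pay for the new mTag
        const uint8_t mLogId;

      public:
        LogBufferElement(uint8_t logId, uint32_t tag, uint16_t msgLen)
            : mTag(tag), mMsgLen(msgLen), mLogId(logId) {}

        uint32_t getTag() const { return mTag; }   // valid even after mMsg is dropped
        bool isBinary() const {
            return mLogId == LOG_ID_EVENTS || mLogId == LOG_ID_SECURITY;
        }
    };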
Test: define DEBUG_CHECK_FOR_STALE_ENTRIES and look for stale entries
Bug: 32247044
Change-Id: Iaa5f416718a92c9e0e6ffd56bd5260d8b908d5c0