
Fix high CPU utilization regression on event streaming #3318


Merged
merged 1 commit into main from sr/transcribe_fix_last_final_fix_final_v2b on Feb 28, 2025

Conversation

SergeyRyabinin (Contributor) commented Feb 27, 2025

Issue #, if available:
Introduced by #3302.
CPU utilization went back to 100% during streaming because of input stream polling.

Description of changes:
Introduce another special stream value: an int returned by peek whose bytes spell "amzn", representing the condition where we have nothing to send AND the session is still open.
Regular EOF (aka -1) will still always close the stream. Single characters such as 'a' or 'z' would not work as sentinels because they can also represent a valid byte from the stream.

Tested by running a streaming session and observing htop.
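
For context, here is a minimal consumer-side sketch of the idea. It is illustrative only, not the SDK's actual code: kNoData, PumpEventStream, and the 10 ms backoff are assumptions made up for this example.

#include <streambuf>
#include <functional>
#include <thread>
#include <chrono>

// Hypothetical sentinel: an int outside the valid byte range 0..255 and distinct
// from EOF (-1), so it can never collide with a real byte from the stream.
static const int kNoData = 0x616D7A6E; // bytes spell 'a' 'm' 'z' 'n'

// Illustrative pump loop: instead of spinning at 100% CPU when the request
// stream is momentarily empty, back off until data arrives or the stream closes.
void PumpEventStream(std::streambuf& source, const std::function<void(char)>& send)
{
    for (;;)
    {
        const int next = source.sgetc(); // peek at the next value without consuming it
        if (next == std::char_traits<char>::eof())
        {
            break; // -1 always means the session is finished: close the stream
        }
        if (next == kNoData)
        {
            // Nothing to send yet, but the session is still open: yield the CPU.
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
            continue;
        }
        send(static_cast<char>(source.sbumpc())); // a real byte: forward it
    }
}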

Check all that apply:

  • Did a review by yourself.
  • Added proper tests to cover this PR. (If tests are not applicable, explain.)
  • Checked if this PR is a breaking (APIs have been changed) change.
  • Checked if this PR will not introduce cross-platform inconsistent behavior.
  • Checked if this PR would require a ReadMe/Wiki update.

Check which platforms you have built the SDK on to verify the correctness of this PR.

  • Linux
  • Windows
  • Android
  • macOS
  • iOS
  • Other Platforms

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@SergeyRyabinin marked this pull request as ready for review February 28, 2025 00:02
@@ -357,6 +357,14 @@ HttpResponseOutcome AWSClient::AttemptExhaustively(const Aws::Http::URI& uri,
{
break;
}
if (request.IsEventStreamRequest() &&
Contributor
Can we also extend this change to the smithy client? Otherwise pending fixes will pile up.

Contributor Author
I'd prefer a completely different streaming design for the smithy client.
Also, the smithy client does not use a for loop for its retry logic.
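
The hunk above is truncated, so purely as a hypothetical illustration of why a retry loop might need to special-case event stream requests (every name below is invented for the example; this is not the PR's actual logic):

#include <functional>

// Hypothetical helper: decide whether to leave a retry loop. An event stream
// body is produced incrementally by the application, so once bytes have been
// pulled from it the request cannot simply be replayed; retrying would only
// spin and keep the input stream open.
bool ShouldStopRetrying(bool attemptSucceeded,
                        bool isEventStreamRequest,
                        bool requestStreamConsumed,
                        const std::function<bool()>& retryStrategyAllowsRetry)
{
    if (attemptSucceeded)
    {
        return true; // done, leave the loop
    }
    if (isEventStreamRequest && requestStreamConsumed)
    {
        return true; // cannot safely replay a partially consumed event stream
    }
    return !retryStrategyAllowsRetry(); // otherwise defer to the configured retry strategy
}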

@@ -15,10 +15,11 @@ namespace Aws
namespace Stream
{
const char TAG[] = "ConcurrentStreamBuf";

const int ConcurrentStreamBuf::noData = ((((('n' << 8) | 'z') << 8) | 'm') << 8) | 'a';
Contributor
Where are 'n' and 'm' evaluated? I see that the 'z' and 'a' states are now treated as the same state when the try-lock can't be acquired or m_getArea is empty.

Contributor Author

ConcurrentStreamBuf::noData is a magic constant with a value of 1634564718.
1634564718 is an integer whose bytes are 'a', 'm', 'z', and 'n'. Just a bit of fun; I thought about using leetspeak too, but decided on something we can claim as a reserved value/keyword.
As mentioned in the code comment, it is:

A flag returned by underflow() if there is no data available at the moment but stream must not be closed yet.

We can't return -1: it is already reserved for EOF.
We can't return a single byte: it would be treated as a valid user value from the stream.
But we can still return some other int.
Honestly, it is still kind of an "implementation-defined hack", but it works.
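
A minimal sketch of that idea follows, assuming a simplified internal buffer. It is not the actual ConcurrentStreamBuf; PollableStreamBuf and kNoData are names made up for the example, and the producer side is omitted.

#include <streambuf>
#include <mutex>
#include <vector>

// Simplified streambuf whose underflow() reports a sentinel int that is neither
// EOF (-1) nor a valid byte (0..255) when the buffer is momentarily empty but
// the producer has not finished yet.
class PollableStreamBuf : public std::streambuf
{
public:
    static const int kNoData = 0x616D7A6E; // bytes spell 'a' 'm' 'z' 'n'

protected:
    int underflow() override
    {
        std::unique_lock<std::mutex> lock(m_mutex, std::try_to_lock);
        if (!lock.owns_lock())
        {
            return kNoData; // could not grab the lock; caller should retry later
        }
        if (m_buffer.empty())
        {
            // No bytes right now: report EOF only if the producer is done,
            // otherwise report "no data yet" so the stream stays open.
            return m_producerDone ? traits_type::eof() : kNoData;
        }
        setg(m_buffer.data(), m_buffer.data(), m_buffer.data() + m_buffer.size());
        return traits_type::to_int_type(*gptr());
    }

private:
    std::mutex m_mutex;
    std::vector<char> m_buffer; // filled by a producer thread (omitted here)
    bool m_producerDone = false;
};

The consumer then has to check for this sentinel explicitly, since standard stream helpers only understand EOF and regular byte values.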

@sbera87 self-requested a review February 28, 2025 18:57
@SergeyRyabinin merged commit f0ba75b into main Feb 28, 2025
3 of 4 checks passed
@SergeyRyabinin deleted the sr/transcribe_fix_last_final_fix_final_v2b branch February 28, 2025 18:59