Official Android SDK for Stream Video
This is the official Android SDK for Stream Video, a service for building video calls, audio rooms, and live-streaming applications. This library includes both a low-level video SDK and a set of reusable UI components. Most users start with the Compose UI components and fall back to the lower-level API when they want to customize things.
What is Stream?
Stream allows developers to rapidly deploy scalable feeds, chat messaging and video with an industry-leading 99.999% uptime SLA guarantee.
Stream provides UI components and state handling that make it easy to build video calling for your app. All calls run on Stream's network of edge servers around the world, ensuring optimal latency and reliability.
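As a quick orientation, here is a minimal sketch of the typical getting-started flow: create a client, create and join a call, and render it with the Compose components. The builder parameters, package paths, and composables follow the SDK's documented entry points but may vary between releases, and the API key, token, user, and call ID below are placeholders.

```kotlin
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.lifecycle.lifecycleScope
import io.getstream.video.android.compose.theme.VideoTheme
import io.getstream.video.android.compose.ui.components.call.activecall.CallContent
import io.getstream.video.android.core.StreamVideoBuilder
import io.getstream.video.android.model.User
import kotlinx.coroutines.launch

class MinimalCallActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // 1. Create the Stream Video client (credentials below are placeholders).
        val client = StreamVideoBuilder(
            context = applicationContext,
            apiKey = "YOUR_API_KEY",
            user = User(id = "tutorial-user", name = "Tutorial"),
            token = "USER_JWT_TOKEN",
        ).build()

        // 2. Create (or look up) a call on Stream's edge network and join it.
        val call = client.call(type = "default", id = "demo-call")
        lifecycleScope.launch { call.join(create = true) }

        // 3. Render the full call UI with the reusable Compose components.
        setContent {
            VideoTheme {
                CallContent(call = call)
            }
        }
    }
}
```

When you need more control than the prebuilt `CallContent` screen offers, the same `call` object exposes the underlying state, so you can drop down to the lower-level API without changing the setup above.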
Tutorials
With Stream's video components, you can build in-app video calling, audio rooms, audio calls, or live streaming. The best place to get started is with the tutorials:
If you're interested in customizing the UI components for the Video SDK, check out the UI Cookbook.
Previews
Sample Projects
You can find sample projects below that demonstrate use cases of the Stream Video SDK for Android:
- Dogfooding: demonstrates the Stream Video SDK for Android with a modern Android tech stack, including Compose, Hilt, and Coroutines.
- Meeting Room Compose: A real-time meeting room app built with Jetpack Compose to demonstrate video communications.
Free for Makers
Stream is free for most side and hobby projects. To qualify, your project/company needs to have < 5 team members and < $10k in monthly revenue. Makers get $100 in monthly credit for video for free. For more details, check out the Maker Account.
Supported Features
Here are some of the features we support:
- Developer experience: Great SDKs, docs, tutorials and support so you can build quickly
- Edge network: Servers around the world ensure optimal latency and reliability
- Chat: Stored chat, reactions, threads, typing indicators, URL previews, etc.
- Security & Privacy: Based in the USA and EU, SOC 2 certified, GDPR compliant
- Dynascale: Automatically switch resolutions, fps, bitrate, codecs and paginate video on large calls
- Screensharing
- Picture in picture support
- Active speaker
- Custom events
- Geofencing
- Notifications and ringing calls
- Opus DTX & RED for reliable audio
- Webhooks & SQS
- Backstage mode
- Flexible permissions system
- Joining calls by ID, link or invite
- Enabling and disabling audio and video while in calls
- Flipping, enabling, and disabling the camera in calls
- Enabling and disabling the speakerphone in calls (see the device-control sketch after this list)
- Support for push notification providers
- Call recording
- Broadcasting to HLS
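As a sketch of how the in-call device controls above look in code, assuming an already-joined `call` object: the camera/microphone/speaker managers and method names follow the SDK's call API, but may differ slightly between versions.

```kotlin
import io.getstream.video.android.core.Call

// Hedged sketch: toggling devices on a call that has already been joined.
fun toggleDevices(call: Call, enabled: Boolean) {
    // Enable or disable the outgoing video track.
    if (enabled) call.camera.enable() else call.camera.disable()

    // Enable or disable the outgoing audio track.
    call.microphone.setEnabled(enabled)

    // Switch between the front and back camera.
    call.camera.flip()

    // Route audio to the speakerphone, or back to the earpiece.
    call.speaker.setEnabled(enabled)
}
```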
Roadmap
The video roadmap and changelog are available here.
0.2.0 milestone
- Publish app on play store
- Example button to switch speakerphone/earpiece (Jaewoong)
- Chat Integration (Jaewoong)
- Automatically handle pagination and sorting on > 6 participants in the sample app (Daniel)
- Buttons to simulate ICE restart and SFU switching (Jaewoong)
- Bug: java.net.UnknownHostException: Unable to resolve host "hint.stream-io-video.com" isn't thrown but instead logged as INFO (Daniel)
- Bug: Call.join will throw an exception if the error is other than HttpException
- Report version number of SDK on all API calls (Daniel)
- Bug: screensharing is broken. Android doesn't receive/render (not sure) the screenshare; the video shows up as the gray avatar
- Support settings.audio.default_device (Daniel)
- Bug: Sample app has a bug where we don't subscribe to call changes, we need to use call.get in the preview screen so we know the number of participants (Daniel)
- Local video disconnects sometimes (ICE restart issue for the publisher; waiting for backend support) (Thierry)
- Deeplink support for video call demo & dogfooding app (skip auth for the video demo, keep it for dogfooding) (Jaewoong)
- XML version of VideoRenderer (Jaewoong)
- sortedParticipants StateFlow doesn't update accurately (Thierry)
- Reactions
- Bug: screenshare is not removed after it stops when a participant leaves the call (Thierry) (probably just don't update the state when the participant leaves)
- Speaking-while-muted StateFlow (Daniel)
- Bluetooth reliability
- Clean up the retry behaviour in the RtcSession
- SDK development guide for all teams
0.3.0 milestone
- Finish usability testing with design team on chat integration (Jaewoong)
- Pagination on query members & query call endpoints (Daniel)
- Livestream tutorial (depends on RTMP support) (Thierry)
- Local version of audioLevel(s) for lower-latency audio visualizations (Daniel)
- Complete integration with the video demo flow
- Enable ice restarts for publisher and subscriber
- Ringing: Finish it, make testing easy and write docs for common changes (Daniel)
- Bug: Screensharing on Firefox has some issues when rendering on android (Daniel)
- Bug: Screensharing scaling and zoom doesn't work (Daniel)
0.4.0 milestone
- Screensharing from mobile
- Picture of the video stream at the highest resolution + docs on how to add a button for this (Daniel)
- Audio & video filters support (Daniel)
- Implement Chat overlay for Dogfooding (Jaewoong)
- Migrate to Stream Chat SDK v6 stable (Jaewoong)
- Add Dogfooding instructions + link to Google Play (Jaewoong)
- Default livestream player UI + docs (Jaewoong/Daniel)
- Reaction dialog API for Compose (Jaewoong)
- Android SDK development.md cleanup (Daniel)
- Upgrade to more recent versions of webrtc (Kanat)
- Review foreground service vs backend for audio rooms etc. (Daniel)
- Support the participant.custom field, which was previously ignored (ParticipantState line 216) (Daniel)
- Logging is too verbose (RTC is very noisy); clean it up so info and higher levels focus on the essentials (Daniel)
0.5.0 milestone
- Enable SFU switching
- H264 workaround on Samsung 23? (see https://github.com/livekit/client-sdk-android/blob/main/livekit-android-sdk/src/main/java/io/livekit/android/webrtc/SimulcastVideoEncoderFactoryWrapper.kt#L34 and react-native-webrtc/react-native-webrtc#983 (comment))
- Test coverage
- Dynascale 2.0 (depending on backend support)
- Testing on more devices
- Camera controls
- Tap to focus
Dynascale 2.0
- Currently we support selecting which of the 3 layers you want to send: f, h and q. In addition we should support (see the illustrative sketch after this list):
  - changing the resolution of the f track
  - changing the codec that's used from VP8 to H264 or vice versa
  - detecting when webrtc changes the resolution of the f track, and notifying the server about it (if needed)
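For context, the f/h/q naming above refers to the three simulcast layers published for each video track. The snippet below is purely illustrative (not an SDK API) and just models that convention:

```kotlin
// Illustrative only, not part of the SDK API: the three simulcast layers
// ("rid" values) referenced above, with the scale-down factor each one
// typically uses relative to the full-resolution track.
enum class SimulcastLayer(val rid: String, val scaleDownBy: Double) {
    FULL("f", 1.0),     // full resolution
    HALF("h", 2.0),     // half resolution
    QUARTER("q", 4.0),  // quarter resolution
}
```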
We are hiring!
We've recently closed a $38 million Series B funding round and we keep actively growing. Our APIs are used by more than a billion end-users, and you'll have a chance to make a huge impact on the product within a team of the strongest engineers all over the world. Check out our current openings and apply via Stream's website.
License
Copyright (c) 2014-2023 Stream.io Inc. All rights reserved.
Licensed under the Stream License;
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://github.com/GetStream/stream-video-android/blob/main/LICENSE
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.