
Guidance to developers affected by our effort to block less secure browsers and applications

Posted by Lillan Marie Agerup, Product Manager

We are always working to improve the security protections of Google accounts. Our security systems automatically detect, alert and help protect our users against a range of security threats. One form of phishing, known as a “man-in-the-middle” (MITM) attack, is hard to detect when an embedded browser framework (e.g., Chromium Embedded Framework - CEF) or another automation platform is being used for authentication. A MITM attack presents an authentication flow on these platforms and intercepts the communications between a user and Google to gather the user’s credentials (including the second factor in some cases) and sign in. To protect our users from these types of attacks, Google Account sign-ins from all embedded frameworks will be blocked starting on January 4, 2021. This block affects CEF-based apps and other non-supported browsers.

To minimize the disruption of service to our partners, we are providing this information to help developers set up OAuth 2.0 flows in supported user-agents. The information in this document outlines the following:

  • How to enable sign-in on your embedded framework-based apps using browser-based OAuth 2.0 flows.
  • How to test for compatibility.

Apps that use embedded frameworks

If you're an app developer and use CEF or other clients for authorization on devices, use browser-based OAuth 2.0 flows. Alternatively, you can use a compatible full native browser for sign-in.
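As an illustration, one supported pattern on Android is to open the authorization URL in a Custom Tab backed by the user's full browser instead of an embedded WebView. Here's a minimal sketch, assuming the AndroidX Browser library; the client ID, redirect URI and scopes below are placeholders for your own OAuth client configuration:

import android.content.Context
import android.net.Uri
import androidx.browser.customtabs.CustomTabsIntent

// Open Google's OAuth 2.0 authorization endpoint in a Custom Tab so that
// sign-in runs in a real, supported browser. The client ID and redirect
// URI are placeholders, not real values.
fun launchBrowserSignIn(context: Context) {
    val authUri = Uri.parse("https://accounts.google.com/o/oauth2/v2/auth")
        .buildUpon()
        .appendQueryParameter("client_id", "YOUR_CLIENT_ID")
        .appendQueryParameter("redirect_uri", "com.example.app:/oauth2redirect")
        .appendQueryParameter("response_type", "code")
        .appendQueryParameter("scope", "openid email")
        .build()
    CustomTabsIntent.Builder().build().launchUrl(context, authUri)
}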

For limited-input device applications, such as applications that do not have access to a browser or have limited input capabilities, use limited-input device OAuth 2.0 flows.
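As a rough sketch of how the limited-input device flow starts (the endpoint and parameters follow Google's OAuth 2.0 for TV and Limited-Input Device documentation as we understand it; treat this as illustrative, not authoritative):

import java.net.HttpURLConnection
import java.net.URL

// Step 1 of the limited-input device flow: exchange a client ID for a
// device_code/user_code pair. The app then shows the user_code and
// verification_url to the user and polls the token endpoint until the
// user approves. Error handling and JSON parsing are omitted for brevity.
fun requestDeviceCode(clientId: String): String {
    val connection = URL("https://oauth2.googleapis.com/device/code")
        .openConnection() as HttpURLConnection
    connection.requestMethod = "POST"
    connection.doOutput = true
    connection.outputStream.use { out ->
        out.write("client_id=$clientId&scope=email".toByteArray())
    }
    return connection.inputStream.bufferedReader().use { it.readText() }
}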

Browsers

Modern browsers with security updates will continue to be supported.

Browser standards

The browser must have JavaScript enabled. For more details, see our previous blog post.

The browser must not proxy or alter the network communication. Your browser must not do any of the following:

  • Perform server-side rendering
  • Proxy HTTPS traffic
  • Replay requests
  • Rewrite HTTP headers

The browser must have a reasonably complete implementation of web standards and browser features. You must confirm that your browser is not any of the following:

  • A headless browser
  • A Node.js environment
  • A text-based browser

The browser must identify itself clearly in the User-Agent. The browser must not try to impersonate another browser like Chrome or Firefox.

The browser must not provide automation features. This includes scripts that automate keystrokes or clicks, especially to perform automatic sign-ins. We do not allow sign-in from browsers based on frameworks like CEF or Embedded Internet Explorer.

Test for compatibility

If you're a developer who currently uses CEF for sign-in, be aware that support for this type of authentication ends on January 4, 2021. To verify whether you'll be affected by the change, test your application for compatibility by adding a specific HTTP header and value to disable the allowlist. The following steps explain how to disable the allowlist:

  1. Go to where you send requests to accounts.google.com.
  2. Add Google-Accounts-Check-OAuth-Login:true to your HTTP request headers.

The following example details how to disable the allowlist in CEF.

Note: You can add your custom headers in CefRequestHandler#OnBeforeResourceLoad.

    CefRequest::HeaderMap hdrMap;
    request->GetHeaderMap(hdrMap);
    hdrMap.insert(std::make_pair("Google-Accounts-Check-OAuth-Login", "true"));
    // Write the modified map back so the header is actually sent.
    request->SetHeaderMap(hdrMap);

To test manually in Chrome, use ModHeader to set the header. The header enables the changes for that particular request.

Setting the header using ModHeader

Related content

See our previous blog post about protection against man-in-the-middle phishing attacks.

ML Kit Pose Detection Makes Staying Active at Home Easier

Posted by Kenny Sulaimon, Product Manager, ML Kit; Chengji Yan and Areeba Abid, Software Engineers, ML Kit


Two months ago we introduced the standalone version of the ML Kit SDK, making it even easier to integrate on-device machine learning into mobile apps. Since then we’ve launched the Digital Ink Recognition API, and also introduced the ML Kit early access program. Our first two early access APIs were Pose Detection and Entity Extraction. We’ve received an overwhelming amount of interest in these new APIs and today, we are thrilled to officially add Pose Detection to the ML Kit lineup.

ML Kit Overview

A New ML Kit API, Pose Detection


Examples of ML Kit Pose Detection

ML Kit Pose Detection is an on-device, cross-platform (Android and iOS), lightweight solution that tracks a subject's physical actions in real time. With this technology, building a one-of-a-kind experience for your users is easier than ever.

The API produces a full-body, 33-point skeletal match that includes facial landmarks (ears, eyes, mouth, and nose), along with hand and foot tracking. The API was also trained on a variety of complex athletic poses, such as yoga positions.

Skeleton image detailing all 33 landmark points

Under The Hood

Diagram of the ML Kit Pose Detection Pipeline

The power of the ML Kit Pose Detection API is in its ease of use. The API builds on the cutting-edge BlazePose pipeline and allows developers to build great experiences on Android and iOS with little effort. We offer a full-body model, support for both video and static image use cases, and have added multiple pre- and post-processing improvements to help developers get started with only a few lines of code.

The ML Kit Pose Detection API utilizes a two-step process for detecting poses. First, the API combines an ultra-fast face detector with a prominent person detection algorithm to detect when a person has entered the scene. The API is capable of detecting a single (highest-confidence) person in the scene and requires the face of the user to be present in order to ensure optimal results.

Next, the API applies a full-body skeleton of 33 landmark points to the detected person. These points are rendered in 2D space and do not account for depth. The API also contains a streaming mode option for further performance and latency optimization. When enabled, instead of running person detection on every frame, the API only runs this detector when the previous frame no longer detects a pose.

The ML Kit Pose Detection API also features two operating modes, “Fast” and “Accurate”. With the “Fast” mode enabled, you can expect a frame rate of around 30+ FPS on a modern Android device, such as a Pixel 4, and 45+ FPS on a modern iOS device, such as an iPhone X. With the “Accurate” mode enabled, you can expect more stable x,y coordinates on both types of devices, but a slower frame rate overall.

Lastly, we’ve also added a per-point “InFrameLikelihood” score to help app developers ensure their users are in the right position and filter out extraneous points. This score is calculated during the landmark detection phase; a low likelihood score suggests that a landmark is outside the image frame.
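To make this concrete, here's a hedged sketch of configuring the detector and reading a landmark's score, based on the launch-era ML Kit APIs (exact artifact and package names may differ across SDK versions):

import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.PoseLandmark
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

// Configure the detector for a continuous video stream ("Fast" mode).
val options = PoseDetectorOptions.Builder()
    .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
    .build()

val poseDetector = PoseDetection.getClient(options)

fun detectPose(image: InputImage) {
    poseDetector.process(image)
        .addOnSuccessListener { pose ->
            val leftShoulder = pose.getPoseLandmark(PoseLandmark.LEFT_SHOULDER)
            // A low InFrameLikelihood suggests the point is outside the frame.
            val likelihood = leftShoulder?.inFrameLikelihood
        }
        .addOnFailureListener { e ->
            // Handle detection errors.
        }
}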

Real World Applications


Examples of a pushup and squat counter using ML Kit Pose Detection

Keeping up with regular physical activity is one of the hardest things to do while at home. We often rely on gym buddies or physical trainers to help us with our workouts, but this has become increasingly difficult. Apps and technology can often help with this, but with existing solutions, many app developers are still struggling to understand and provide feedback on a user’s movement in real time. ML Kit Pose Detection aims to make this problem a whole lot easier.

The most common applications for Pose detection are fitness and yoga trackers. It’s possible to use our API to track pushups, squats and a variety of other physical activities in real time. These complex use cases can be achieved by using the output of the API, either with angle heuristics, tracking the distance between joints, or with your own proprietary classifier model.
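For example, one common angle heuristic computes the angle at a middle joint from three landmarks (say, hip-knee-ankle to detect the bottom of a squat). A minimal sketch, assuming the 2D landmark positions described earlier:

import com.google.mlkit.vision.pose.PoseLandmark
import kotlin.math.abs
import kotlin.math.atan2

// Returns the angle (in degrees) at `mid`, formed by first-mid-last.
fun getAngle(first: PoseLandmark, mid: PoseLandmark, last: PoseLandmark): Double {
    var angle = Math.toDegrees(
        (atan2(last.position.y - mid.position.y,
               last.position.x - mid.position.x) -
         atan2(first.position.y - mid.position.y,
               first.position.x - mid.position.x)).toDouble()
    )
    angle = abs(angle)
    if (angle > 180) angle = 360 - angle  // keep the inner angle
    return angle
}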

To get you jump-started with classifying poses, we are sharing additional tips on how to use angle heuristics to classify popular yoga poses. Check it out here.

Learning to Dance Without Leaving Home

Learning a new skill is always tough, but learning to dance without the aid of a real time instructor is even tougher. One of our early access partners, Groovetime, has set out to solve this problem.

With the power of ML Kit Pose Detection, Groovetime allows users to learn their favorite dance moves from popular short-form dance videos, while giving users automated real time feedback on their technique. You can join their early access beta here.

Groovetime App using ML Kit Pose Detection

Staying Active Wherever You Are

Our Pose Detection API is also helping adidas Training, another one of our early access partners, build a virtual workout experience that will help you stay active no matter where you are. This one-of-a-kind innovation will help analyze and give feedback on your movements, using nothing more than your phone. Integration into the adidas Training app is still in the early phases of the development cycle, but stay tuned for more updates in the future.

How to get started?

If you would like to start using the Pose Detection API in your mobile app, head over to the developer documentation or check out the sample apps for Android and iOS to see the API in action. For questions or feedback, please reach out to us through one of our community channels.

Helping the Haitian economy, one line of code at a time

Posted by Jennifer Kohl, Program Manager, Developer Community Programs


Eustache Luckens Yadley at a GDG Port-au-Prince meetup

Meet Eustache Luckens Yadley, or “Yadley” for short. As a web developer from Port-au-Prince, Yadley has spent his career building web applications that benefit the local Haitian economy. Whether it’s ecommerce platforms that bring local sellers to market or software tools that help local businesses operate more effectively, Yadley has always been there with a technical hand to lend.

However, Yadley has also spent his career watching Haiti’s unemployment numbers rise to among the highest in the Caribbean. As he describes it,


“Every day, several thousand young people have no job to get by.”


So with code in mind and mouse in hand, Yadley got right to work. His first step was to identify a need in the economy. He soon figured out that Haiti had a shortage of delivery methods for consumers, making home delivery purchases of any kind extremely unreliable. Alongside this observation, Yadley noticed that there was a surplus of workers willing to deliver the goods, but no infrastructure to align their needs with those of the market.


Yadley watching a demo at a GDG Port-au-Prince meetup

In this moment, Yadley did what many good developers would do: build an app. He created the framework for what is now called “Livrezonpam,” an application that allows companies to post where and when they need a particular product delivered and workers to find the corresponding delivery jobs closest to them.

With a brilliant solution, Yadley’s last step was to find the right technical tools to build the concept out and make it a viable platform that users could work with to their benefit.

It was at this crucial step when Yadley found the Port-au-Prince Google Developer Group. With GDG Port-au-Prince, Yadley was able to bring his young app right into the developer community, run different demos of his product to experienced users, and get feedback from a wide array of developers with an intimate knowledge of the Haitian tech scene. The takeaways from working in the community translated directly to his work. Yadley learned how to build with the Google Cloud Platform Essentials, which proved key in managing all the data his app now collects. He also learned how to get the Google Maps Platform API working for his app, creating a streamlined user experience that helped workers and companies in Haiti locate one another with precision and ease.


This wide array of community technical resources, from trainings, to mentors, to helpful friends, allowed Yadley to grow his knowledge of several Google technologies, which in turn allowed him to grow his app for the Haitian community.

Today, Yadley is still an active member of the GDG community, growing his skills and those of the many friends around him. And at the same time, he is still growing Livrezonpam on the Google Play Store to help local businesses reach their customers and bring more jobs directly to the people of Haiti.


Ready to start building with a Google Developer Group near you? Find the closest community to you, here.

Announcing Jetpack Compose Alpha!

Posted by Karen Ng, Director, Product Management

Today, we’re releasing the alpha of Jetpack Compose, our modern UI toolkit designed to help you quickly and easily build beautiful apps across all Android platforms, with native access to the platform APIs. Bring your app to life with dramatically less code, interactive tools, and intuitive Kotlin APIs.

No matter where you’re working from, whether it’s your kitchen table or an office, we know you need a programming language, an IDE and a powerful UI framework that can save you time and reduce how much code you need to write. So we built Jetpack Compose to make you (and us!) more productive with building UI.

We started with Android Jetpack — taking the hardest, most common developer problems on Android and creating a suite of libraries that ensure high quality apps that work across all versions of the platform. Today, 84% of the top 10,000 apps in the Play store are using a Jetpack library.

Then we heard how developers love Kotlin, with over 70% of the top 1000 apps and 60% of pro Android developers using Kotlin today. The Google Home app saw, in certain instances, an 80% reduction in lines of code by using Kotlin and a 33% decrease in NullPointerExceptions compared to a similar past period. Duolingo saw its line count reduced by an average of 30%.

Finally, we heard strong feedback from the community that developers like the simplicity of declarative APIs for building UI. Jetpack Compose combines all three of these: APIs for high quality apps at scale, an intuitive language, and a reactive programming model.


Jetpack Compose: Now in Alpha

Jetpack Compose Alpha has what you need to build full-fledged Android apps, including powerful tools and interoperability with existing Android views so you don’t need to rewrite your app. Compose APIs are designed and developed hand-in-hand with a set of canonical sample apps that use Material Design, which we’re excited to release today! You can import and explore the latest samples directly in Android Studio as well.


The alpha release includes:

  • Animations
  • Constraint Layout
  • Initial A11Y support
  • Input and Gestures
  • Interoperability with Views (start mixing Composable functions in your existing app)
  • Lazy Lists
  • Material UI components
  • Performance optimizations
  • Testing
  • Text and editable Text
  • Theming and Graphics
  • Window management

We've also added a number of new capabilities to Android Studio 4.2 canary, in close partnership with the JetBrains Kotlin team, to help you build apps with Compose:

  • Compose Code completion
  • Compose Preview Annotations
  • Deploy individual composables to any device
  • Interactive Compose previews
  • Kotlin compiler plugin for code generation
  • Sample Data API for Compose

Thinking in Compose

Compose uses a programming model that is quite different from the existing model of building UI on Android. Historically, an Android view hierarchy has been represented as a tree of UI widgets. As the state of the app changes, the UI hierarchy needs to be updated to display the current data. The most common way of updating the UI is to walk the tree using functions like findViewById(), and change nodes by calling methods like:

    button.setText(String)
    container.addView(View)
    img.setImageBitmap(Bitmap)

These methods change the internal state of the widget. Not only can this be tedious, but updating views manually increases the likelihood of errors (e.g. forgetting to update a view).

Jetpack Compose is a fully declarative component-based approach, meaning you describe your UI as functions that transform data into a UI hierarchy. When the underlying data changes, the Compose framework automatically updates the UI hierarchy for you, making it simple to build UIs easily and quickly.
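For instance, here's a minimal sketch of the declarative model (illustrative only; package names follow the current Compose artifacts and varied during the early alphas):

import androidx.compose.foundation.layout.Column
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.*

// The UI is a function of state: clicking the button changes `count`,
// and Compose automatically re-invokes the function (recomposes) so the
// Text always reflects the current value. No findViewById, no setText.
@Composable
fun CounterScreen() {
    var count by remember { mutableStateOf(0) }
    Column {
        Text(text = "Clicked $count times")
        Button(onClick = { count++ }) {
            Text(text = "Click me")
        }
    }
}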

Full interop with existing Android views

Adopting any new framework is a big change for existing projects and codebases, which is why we’ve designed Compose to be as easy to adopt as Kotlin — it is fully interoperable with existing Android code, from day one.

Migrating to Compose depends on you and your team. If you're building a new app, the best option might be to implement your entire UI with Compose. We know that most of you have large existing codebases, so rather than rewriting your app, you can combine Compose with your existing UI design.

There are two main ways you can combine Compose with a view-based UI:

  • You can add Compose elements into your existing UI, either by creating an entirely new Compose-based screen, or by adding Compose elements into an existing fragment or view layout.
  • You can add a view-based UI element into your composable functions. Doing so lets you add non-Compose widgets, such as MapView or WebView, into a Compose-based design, as sketched below.
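As a sketch of that second approach (hedged; the AndroidView signature has shifted across Compose versions), a composable can host a classic View like WebView:

import android.webkit.WebView
import androidx.compose.runtime.Composable
import androidx.compose.ui.viewinterop.AndroidView

// Host a traditional Android View (here a WebView) inside a Compose
// hierarchy: `factory` creates the View once, `update` is called on
// recomposition to push the latest state into it.
@Composable
fun HtmlCard(html: String) {
    AndroidView(
        factory = { context -> WebView(context) },
        update = { webView ->
            webView.loadData(html, "text/html", "utf-8")
        }
    )
}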

We’ve also published a new library, MDC Compose Theme Adapter, which allows you to reuse your existing Material Components themes within your Compose UI.

To learn more, try the Compose for existing apps codelab or check out these two samples:

  • Tivi and Sunflower are existing apps that are being integrated with Compose
  • The Crane sample app embeds a MapView in Compose

Powerful Tools

Jetpack Compose is built with powerful tooling in Android Studio, designed to help you iterate quickly on the piece of UI you’re working on.

The Compose layout preview enables you to preview your Compose components without having to deploy your app to a device or emulator. As you develop your app, your previews update to help you review your changes faster. To create a layout preview, write a composable function that does not take any parameters, and add the @Preview annotation. After you build your app, the preview function's UI appears in Studio's Preview pane.
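For example, a sketch assuming the CounterScreen composable from earlier (the Preview annotation lived in a different package during early alphas):

import androidx.compose.runtime.Composable
import androidx.compose.ui.tooling.preview.Preview

// A parameterless function annotated with @Preview renders in Android
// Studio's Preview pane without deploying to a device or emulator.
@Preview(showBackground = true)
@Composable
fun CounterScreenPreview() {
    CounterScreen()
}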


Android Studio provides an interactive preview mode. While you're in interactive preview mode, you can click or type in your UI elements, and the UI responds as if it were in the installed app.


You can also deploy a single composable to your physical device or Android Emulator. Android Studio creates a new activity containing the UI generated by that function, and deploys it to your app on the device. This lets you try out the UI on an actual device without needing to reinstall the entire app or navigate to its location.


Get started with Jetpack Compose

To get started with Jetpack Compose, try the Compose Tutorial and get set up. Or dive right into the sample apps and walk through them in ‘Compose by Example’.

To find a comprehensive set of Compose resources, from new codelabs and expanded documentation, see the Compose pathway.

Since we open-sourced Jetpack Compose last year, so many of you have given us invaluable feedback, logged bugs, or contributed CLs and have gotten us where we are today. Thank you!

Compose isn’t recommended for full production use yet, in particular as we work towards API stability and finish performance optimizations, but we’d love you to give it a try and share feedback. Join us in the discussion on the #compose channel at Kotlin Slack. Compose 1.0 is expected in 2021.

Happy Composing!

New ways to reach more drivers on Android for cars

Posted by Mickey Kataria, Director of Product Management, Android for cars

This blog post is part of a weekly series for #11WeeksOfAndroid. For each week, we’re diving into a key area and this week we’re focusing on Android Beyond Phones. Today, we’ll be talking about cars.

Since 2014, Google has been committed to bringing the familiarity of apps and services from Android phones into the car in a safe and seamless way. We’re continuing to see strong momentum and adoption of both Android Auto and Android Automotive OS, and are excited to share new improvements that provide app developers the opportunity to reach more users in the car.

Android Auto momentum

We launched Android Auto for users to stay connected on the go and more easily access their Android phones on their car displays, while staying focused on the road. Android Auto is currently available with nearly every major car manufacturer and is on track to be in more than 100 million cars in the coming months. Many car manufacturers, including General Motors, BMW and Kia, have also added support for wireless connections, making it easier for drivers to use Android Auto as soon as they get into their car. We’re continuing to add new features to make the experience more seamless for users and help developers reach more drivers with in-car apps.

Expanding Android Auto’s app ecosystem

One of our most common requests for Android Auto continues to be support for more apps in the car. We currently have over 3,000 apps in Google Play whose in-car experiences have been purpose-built for driving.

Today, we’re showcasing our work with early access partners to build apps in new categories for Android Auto, including navigation, parking and electric vehicle charging. Using our new Android for Cars App Library, we’re able to ensure that all tasks within an app can be achieved with minimal glances or taps.


Early access partners for new apps on Android Auto

To mitigate driver distraction, we collaborated with government, industry and academic institutions to develop our own best practice guidelines that we apply to every aspect of our product development process. With our standard templates and guidelines, developers have the tools to easily optimize their apps for cars, without needing to become an expert in driver distraction.

Our early access partners will be releasing new apps to their beta testers by the end of this year. Pending additional testing and feedback, we then plan to make these APIs publicly available for all developers to build Android Auto apps in these categories.


We're partnering with some of the leading navigation, parking and electric vehicle charging apps around the world including ChargePoint, SpotHero and Sygic.

Android Automotive OS adoption

More recently, we introduced Android Automotive OS as a full-stack, open source and highly customizable platform powering vehicle infotainment systems. With Android Automotive OS, car manufacturers are able to have apps and services like Google Assistant, Google Maps and Google Play built into vehicles so that a mobile device is not required for common activities like navigation, downloading third-party apps and listening to media. Polestar 2, the first car running Android Automotive OS with Google built in, is now on the road and available for customers globally. In addition, Volvo Cars, Renault, General Motors and more have announced plans for infotainment systems powered by Android Automotive OS with Google apps and services built-in.

Extending the reach of media apps in cars

As more manufacturers begin to ship cars with infotainment systems powered by Android Automotive OS, developers have the opportunity to deliver a seamless media experience using Google Play in the car. If you already have a media app for Android Auto, you can extend its reach by adding support for Android Automotive OS. The process for porting over your apps is simple, with most of the work already done; just follow these steps.

Making it easier to develop media apps for Android Automotive OS

For the past year, we have been on a journey to allow app developers to design, develop, test and publish media apps directly on Google Play in the car. We are happy to share that this is now possible.


Polestar 2 and Google Generic Automotive system images for Android emulator

We have made updates to the Android Automotive OS design guidelines and development documentation for you to add support for your media apps. We also launched updates to the emulator to include Google Assistant, Google Maps and Google Play, so you can develop and test your apps in an environment that more closely mirrors the software in the car. The Polestar 2 system image enables you to test your app on similar software that is available on the road today. Lastly, the Play Console now accepts Android Automotive OS APKs, enabling you to simply upload your app for quality review and publishing. These changes allow developers to seamlessly complete the end-to-end development process for Android Automotive OS.

Image of Google Play features

Google Play features many media apps today, including Spotify, iHeartRadio, NPR One and more.

To learn more about how to create an app for Android Automotive OS, look out for updates or post on the automotive-developers Google Group or Stack Overflow using android-automotive tags.

With new app expansion on Android Auto and improved development tools for Android Automotive OS, developers have more opportunity than ever to reach users with app experiences optimized for the car. Head over to developer.android.com/cars to get started!

Resources

You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!

Bringing internet access to millions more Indians with Jio



Today we signed an agreement to invest $4.5 billion (INR 33,737 crore) in Jio Platforms Ltd, taking a 7.73 percent stake in the company, pending regulatory review in India. This is the first investment from the Google for India Digitization Fund announced earlier this week, which aims to accelerate India’s digital economy over the next five to seven years through a mix of equity investments, partnerships, and operational, infrastructure and ecosystem investments.


Google and Jio Platforms have entered into a commercial agreement to jointly develop an entry-level affordable smartphone with optimizations to the Android operating system and the Play Store. Together we are excited to rethink, from the ground up, how millions of users in India can become owners of smartphones. This effort will unlock new opportunities, further power the vibrant ecosystem of applications and push innovation to drive growth for the new Indian economy.


This partnership comes at an exciting but critical stage in India’s digitization. It’s been amazing to see the changes in technology and network plans that have enabled more than half a billion Indians to get online. At the same time, the majority of people in India still don’t have access to the internet, and fewer still own a smartphone—so there’s much more work ahead. 


Our mission with Android has always been to bring the power of computing to everyone, and we’ve been humbled by the way Indians have embraced Android over recent years. We think the time is right to increase our commitment to India significantly, in collaboration with local companies, and this partnership with Jio is the first step. We want to work with Jio and other leaders in the local ecosystem to ensure that smartphones—together with the apps and services in the Play Store—are within reach for many more Indians across the country. And we believe the pace of Indian innovation means that the experiences we create for India can ultimately be expanded to the rest of the world.  


For Google, our work in India goes to the heart of our efforts to organize the world’s information and make it universally accessible. We opened our first Indian campuses in Bangalore and Hyderabad in 2004. Since then, we’ve made India central to our Next Billion Users initiative—designed to ensure the internet is useful for people coming online for the first time. We’ve improved our apps and services so they’re relevant in more Indian languages and created offline versions for those facing network constraints. We’ve extended our tools to small businesses, sought to close digital divides with initiatives like Internet Saathi, and we’re increasingly focused on helping India harness AI. More and more, apps we create for India—like Google Pay or our Read Along language-learning app—influence what we do globally. 


Jio, for its part, has made an extraordinary contribution to India’s technological progress over the past decade. Its investments to expand telecommunications infrastructure, low-cost phones and affordable internet have changed the way its hundreds of millions of subscribers find news and information, communicate with one another, use services and run businesses. Today, Jio is increasing its focus on the development of areas like digital services, education, healthcare and entertainment that can support economic growth and social inclusion at a critical time in the country’s history. 


In partnership, we can draw on each other’s strengths. We look forward to bringing smartphone access to more Indians—and exploring the many ways we can work together to improve Indians’ lives and advance India’s digital economy.

Posted by Sanjay Gupta, Country Head & Vice President, Google India, and Sameer Samat, VP, Product Management

11 Weeks of Android: Android Developer Tools

Posted by Jamal Eason, Product Manager, Android

11 Weeks of Android, Week 7 with badge

This blog post is part of a weekly series for #11WeeksOfAndroid. For each of the #11WeeksOfAndroid, we’re diving into a key area so you don’t miss anything. This week, we spotlighted Android Developer Tools; here’s a look at what you should know.

The big news

During the 11 weeks of Android, we launched a range of developer tool updates in Android Studio. As of today, you can find version 4.0 of Android Studio on the stable release channel, version 4.1 on the beta channel, and the very latest features of version 4.2 on the canary channel. The focus across each of these versions is a balance of app productivity and delivery of a high-quality product that you can rely on for app development. For each day of this past week we highlighted improvements and tips in the key points of your development flow, from app design, coding, deployment, and build, to app testing with the emulator and app performance profiling. This post highlights the content that we released during the Android Developer Tools week of 11 Weeks of Android.

What to watch and read

To see an overview of what is new in Android Developer Tools across the recent releases of Android Studio, check out this video from the #Android11 Beta launch which includes an exciting and in-depth demo.

What’s New in Android Development Tools

Design

At the beginning of the week we had a day of content focused on app design tools for developers. To start, watch this overview video of the latest updates in design tools:

What’s new in Design Tools

We also posted in-depth blog posts for the design tools day, including:

  • Introducing the Motion Editor - provides a quick tour of the new Motion Editor and how to use the latest features to create animations for your app.

To debug your layouts, watch our video on the updates to the layout inspector:

Debugging UI issues with Layout Inspector

And lastly for design tools, we released a video about the latest developments for Jetpack Compose Design tools:

What's new in Compose Design Tools

Coding & Deployment

During the week, we posted tips and tricks to improve your coding experience and app deployment flow in Android Studio. Check out the following social media channels to review the latest postings:

  • @androidstudio - the Twitter channel for the official IDE for Android app development.
  • @androiddev - delivers news and announcements for developers from the Android team at Google

We also shared a new video on how to use the new database inspector in Android Studio:

Database Inspector

Additionally, you will find an updated blog post on the development tools we have in place for Jetpack Hilt.

Build

In the middle of the week, we released four blog posts about the build system in Android developer tools, which included:

  • Configuration Caching deep dive - a technical explanation on this new preview feature from Gradle and how to try it out in your project to speed up your builds.
  • Shrinking Your App with R8 - provides an overview of the features available in R8, the reduction in code size you might expect, and show how to enable these features in R8.

Android Emulator

On top of sharing a series of best practices and tips on social media about using the Android Emulator during the week, you can also find a full summary in an in-depth article.

Performance Profilers

We know improving app performance is critical for a great user experience. Therefore, we ended the week with a day on performance profilers content. To start, we posted a video about System Trace and how you can use it to troubleshoot app performance issues:

Troubleshooting app performance issues with System Trace in Android Studio

Plus, we published a blog post on C++ memory profiling.

Learning path

If you’re looking for an easy way to pick up the highlights of this week, check out the Developer Tools pathway. A pathway is an ordered tutorial that allows users to complete a pre-defined module that culminates in a quiz. It includes videos and blog posts. A virtual badge is awarded to each user who passes the quiz. Test your knowledge of key takeaways about Developer Tools to earn a limited edition badge.

Key takeaways

Thank you for tuning in and learning about the latest in Android Development tools. Thanks to all of you who chatted with us during the Reddit AMA this week. Throughout this past week, we showcased features that can be found either in the latest stable release or the canary release channel of Android Studio. If you want to try out what you learned this week, download Android Studio today.

Below is a quick listing of where you will find each of the major features. Note that features in non-stable versions may not land in a particular version until they have reached our quality bar:

Features found in Android Studio 4.0 (Stable Channel)

  • Motion Editor
  • Layout Inspector
  • Layout Validation
  • Custom View Preview
  • CPU Profiler Update
  • R8 Rules Editing
  • Build Analyzer
  • Dynamic Feature Dependency
  • Clangd support
  • IntelliJ 2019.3

Features found in Android Studio 4.1 (Beta Channel)

  • Database Inspector
  • Dependency Injection Tools
  • Faster Apply Changes
  • Gradle Configuration Caching (Preview)
  • Custom View Preview
  • Android Emulator in IDE
  • Instrumentation Testing
  • Profiler UI Updates
  • Native Memory Profiling
  • System Trace 2.0
  • New Gradle API
  • MLKit & TFLite Model Import
  • IntelliJ 2020.1

Features found in Android Studio 4.2 + (Canary Channel)

  • Compose Interactive Preview
  • Compose Animation Visualization
  • Compose Deploy to Device
  • Sample Data API for Compose
  • Compose Editing Support
  • Test Failure Retention
  • Android Emulator- 5G Connectivity and Foldable Support
  • IntelliJ 2020.2 - coming soon

Resources

You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!

Improving inter-activity communication with Jetpack ActivityResult

Posted by Yacine Rezgui, Developer Advocate

Whether you're requesting a permission, selecting a file from the system file manager, or expecting data from a third-party app, passing data between activities is a core element of inter-activity communication on Android. We’ve recently released the new ActivityResult APIs to help handle these activity results.

Previously, to get results from started activities, apps needed to implement an onActivityResult() method in their activities and fragments, check which requestCode a result is referring to, verify that the resultCode is OK, and finally inspect its result data or extended data.

This leads to complicated code, and it doesn’t provide a type-safe interface for expected arguments when sending or receiving data from an activity.

What are the ActivityResult APIs?

The ActivityResult APIs were added to the Jetpack activity and fragment libraries, making it easier to get results from activities by providing type-safe contracts. These contracts define expected input and result types for common actions like taking a picture or requesting a permission, while also providing a way to create your own contracts.
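For instance, with one of the prebuilt contracts (a sketch; it assumes the code runs inside a ComponentActivity or Fragment, where registerForActivityResult is available):

import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class PermissionActivity : AppCompatActivity() {
    // The callback receives a type-safe Boolean instead of a
    // requestCode/resultCode pair to decode by hand.
    private val requestPermission =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { isGranted ->
            if (isGranted) {
                // Permission granted; start the camera, etc.
            }
        }

    fun onTakePhotoClicked() {
        requestPermission.launch(Manifest.permission.CAMERA)
    }
}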

The ActivityResult APIs provide components for registering for an activity result, launching a request, and handling its result once it is returned by the system. You can also receive the activity result in a separate class from where the activity is launched and still rely on the type-safe contracts.

How to use it

To demonstrate how to use the ActivityResult APIs, let’s go over an example where we’re opening a document.

First, you need to add the following dependencies to your gradle file:

repositories {
    google()
    mavenCentral()
}

dependencies {
  implementation "androidx.activity:activity:1.2.0-alpha02"
  implementation "androidx.fragment:fragment:1.3.0-alpha02"
}

You need to register a callback along with the contract that defines its input and output types.

In this context, GetContent() refers to the ACTION_GET_CONTENT intent, and is one of the default contracts already defined in the Activity library. You can find the complete list of contracts here.

val getContent = registerForActivityResult(GetContent()) { uri: Uri? ->
    // Handle the returned Uri
}

Now we need to launch our activity using the returned launcher. As you can set a mime type filter when listing the selectable files, launch() accepts a string as a parameter:

val getContent = registerForActivityResult(GetContent()) { uri: Uri? ->
    // Handle the returned Uri
}

override fun onCreate(savedInstanceState: Bundle?) {
    // ...

    val selectButton = findViewById<Button>(R.id.select_button)

    selectButton.setOnClickListener {
        // Pass in the mime type you'd like to allow the user to select
        // as the input
        getContent.launch("image/*")
    }
}

Once an image has been selected and you return to your activity, your registered callback will be executed with the expected results. As you saw through the code snippets, ActivityResult brings an easier developer experience when dealing with results from activities.
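If none of the prebuilt contracts fit your use case, you can write your own by subclassing ActivityResultContract. Here's a hedged sketch, along the lines of the ringtone-picker example in the AndroidX documentation:

import android.app.Activity
import android.content.Context
import android.content.Intent
import android.media.RingtoneManager
import android.net.Uri
import androidx.activity.result.contract.ActivityResultContract

// A custom contract with a typed input (the ringtone type to show) and
// a typed output (the picked ringtone Uri, or null if cancelled).
class PickRingtone : ActivityResultContract<Int, Uri?>() {
    override fun createIntent(context: Context, input: Int): Intent =
        Intent(RingtoneManager.ACTION_RINGTONE_PICKER)
            .putExtra(RingtoneManager.EXTRA_RINGTONE_TYPE, input)

    override fun parseResult(resultCode: Int, intent: Intent?): Uri? =
        if (resultCode == Activity.RESULT_OK)
            intent?.getParcelableExtra(RingtoneManager.EXTRA_RINGTONE_PICKED_URI)
        else null
}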

Start using Activity 1.2.0-alpha02 and Fragment 1.3.0-alpha02 for a type-safe way to handle your intent results with the new ActivityResult APIs.

Let us know what you think and how we can make it better by providing feedback on the issue tracker.

Decrease startup time with Jetpack App Startup

Posted by Yacine Rezgui, Developer Advocate and Rahul Ravikumar, Software Engineer


Application startup time is a critical metric for any application. Users expect apps to be responsive and fast to load. When an application does not meet this expectation, it can be disappointing to users. This poor experience may cause a user to rate your app badly on the Play store, or even abandon your app altogether.

Jetpack App Startup is a library that provides a straightforward, performant way to initialize components at application startup. Both library developers and app developers can use App Startup to streamline startup sequences and explicitly set the order of initialization.

Apps and libraries often rely on having components (WorkManager, ProcessLifecycleObserver, FirebaseApp etc.) initialized before Application.onCreate(). This is usually achieved by using content providers to initialize each dependency. Instead of defining separate content providers for each component that needs to be initialized, App Startup lets you define initializers that share a single content provider. This significantly improves app startup time, usually by ~2ms per content provider. App Startup also helps you further improve startup performance by making it really easy to initialize components lazily. When App Startup goes stable, we will be updating our libraries like `WorkManager` and `ProcessLifecycle` to benefit from this as well.

App Startup supports API level 14 and above.

How to use it

Gradle setup

To use App Startup in your library or app, add the following dependency to your gradle file:

repositories {
    google()
    mavenCentral()
}

dependencies {
  implementation "androidx.startup:startup-runtime:1.0.0-alpha02"
}

Define an Initializer

To be able to use App Startup in your application, you need to define an Initializer. This is where you define how to initialize and specify your dependencies. Here’s the interface you need to implement:

interface Initializer<out T: Any> {
    fun create(context: Context): T
    fun dependencies(): List<Class<out Initializer<*>>>
}

As a practical example, here’s what an Initializer that initializes WorkManager might look like:

class WorkManagerInitializer : Initializer<WorkManager> {
    override fun create(context: Context): WorkManager {
        val configuration = Configuration.Builder()
            .setMinimumLoggingLevel(Log.DEBUG)
            .build()

        WorkManager.initialize(context, configuration)
        return WorkManager.getInstance(context)
    }
   
    // This component does not have any dependencies
    override fun dependencies() = emptyList<Class<out Initializer<*>>>()
}

Note: This example is purely illustrative. This Initializer should actually be defined by the WorkManager library.

Lastly, we need to add an entry for WorkManagerInitializer in the AndroidManifest.xml:

<provider
    android:name="androidx.startup.InitializationProvider"
    android:authorities="${applicationId}.androidx-startup"
    android:exported="false"
    tools:node="merge">
    <!-- This entry makes WorkManagerInitializer discoverable. -->
    <meta-data android:name="com.example.WorkManagerInitializer"
          android:value="androidx.startup" />
</provider>

How it works

App Startup uses a single content provider called InitializationProvider. This content provider discovers initializers by introspecting the <meta-data> entries in the merged AndroidManifest.xml file. This happens before Application.onCreate().

After the discovery phase, App Startup initializes each component only after all of its dependencies have been initialized.
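To make the ordering concrete, here's a hypothetical sketch: AnalyticsLogger and AnalyticsInitializer are invented names for illustration; the dependency declaration is what guarantees WorkManagerInitializer runs first:

import android.content.Context
import androidx.startup.Initializer
import androidx.work.WorkManager

// Hypothetical component that needs WorkManager at construction time.
class AnalyticsLogger(private val workManager: WorkManager)

class AnalyticsInitializer : Initializer<AnalyticsLogger> {
    override fun create(context: Context): AnalyticsLogger {
        // Safe: App Startup has already run WorkManagerInitializer.
        return AnalyticsLogger(WorkManager.getInstance(context))
    }

    // Declaring the dependency enforces the initialization order.
    override fun dependencies(): List<Class<out Initializer<*>>> =
        listOf(WorkManagerInitializer::class.java)
}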

Lazy initialization

We highly recommend using lazy initialization to further improve startup performance. To make initialization of a component lazy, you need to do the following:

Add a tools:node="remove" attribute to the <meta-data> entry for the Initializer. This disables eager initialization.

<provider
    android:name="androidx.startup.InitializationProvider"
    android:authorities="${applicationId}.androidx-startup"
    android:exported="false"
    tools:node="merge">
    <!-- disables eager initialization -->
    <meta-data android:name="com.example.WorkManagerInitializer"
              tools:node="remove" />
</provider>

To lazily initialize WorkManagerInitializer you can then use:

// This returns an instance of WorkManager
AppInitializer.getInstance(context)
    .initializeComponent(WorkManagerInitializer::class.java)

Your app now initializes the component lazily. For more information, please read our detailed documentation here.

Final thoughts

App Startup is currently in alpha-02. Find out more about how to use it from our documentation. Once you try it out, help us make it better by giving us feedback on the issue tracker.