Looking for new collaborators for this repo - open an issue if you would like to be considered
CognitiveJ - Image Analysis in Java
CognitiveJ is an open-source, fluent Java (8) API that manages and orchestrates the interaction between Java applications and Microsoft's Cognitive Services (Project Oxford) machine learning & image processing libraries, and allows you to query and analyze images.
Face service access is limited based on eligibility and usage criteria. You will need a valid subscription key for the Azure Face service.
Faces
- Facial Detection – Capture faces, gender, age and associated facial features and landmarks from an image
- Emotion Detection – Derive the emotional state of faces within an image
- Verification – Verify, with a confidence score, whether two different faces are of the same person
- Identification – Identify a person from a set of known people
- Find Similar – Detect, group and rank similar faces
- Grouping – Group people based on facial characteristics
- Person Group/Person/Face Lists – Create, manage and train groups, face lists and persons to interact with the identification/grouping/find similar face features
Vision
- Image Describe – Describe the visual content of an image and return a real-world caption of what the image shows
- Image Analysis – Extract key details from an image and determine whether it is of an adult/racy nature
- OCR – Detect and extract text from an image
- Thumbnail – Create thumbnail images based on key points of interest within the image
Overlay (Experimental)
- Apply image layers onto images to visually represent found features.
- Apply captions onto faces and images
- Graphically illustrate the Faces/Vision feature sets.
- Pixelate faces in an image.
Other Features
- Works with local or remote images
- Validation of parameters
Getting Started
- Java 8 or above
- Subscription keys for the MS Cognitive libraries (free registration here)
- Add the dependency from JCenter (since JCenter is no longer available, build from source until artifacts are published elsewhere)
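A minimal sketch of building from source into your local Maven repository. The Gradle task name assumes the project applies a Maven publishing plugin; check the build script for the actual tasks, and substitute the real repository URL:

```shell
# Clone the repository (substitute the actual repository URL),
# then build and install the artifact into ~/.m2 for local use.
git clone <repository-url> cognitivej
cd cognitivej
./gradlew build publishToMavenLocal
```

Projects consuming the locally built artifact should then add `mavenLocal()` to their repositories block.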
Gradle
dependencies {
implementation 'cognitivej:cognitivej:0.6.2'
...
}
Maven
<dependency>
<groupId>cognitivej</groupId>
<artifactId>cognitivej</artifactId>
<version>0.6.2</version>
</dependency>
Chained Builders - The builders are simple, lightweight wrappers over the MS Cognitive REST calls that manage the marshalling of parameters/responses, the HTTP communication and the retry strategies. The builders are chained to allow follow-up manipulation, where applicable, of resources that have been created or retrieved.
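As an illustrative sketch of the chaining style, reusing the classes shown in the examples below (`IMAGE_URL` is a placeholder, and the snippet requires the cognitivej artifact plus valid subscription keys to actually run):

```java
import static java.lang.System.getProperty;

public class ChainedBuilderSketch {
    // Placeholder – substitute any local path or remote image URL.
    private static final String IMAGE_URL = "https://example.com/people.jpg";

    public static void main(String[] args) {
        FaceScenarios faceScenarios = new FaceScenarios(
                getProperty("azure.cognitive.face.subscriptionKey"),
                getProperty("azure.cognitive.emotion.subscriptionKey"));
        // Each builder call returns the builder itself, so detection, overlay
        // and display chain into a single fluent expression.
        ImageOverlayBuilder.builder(IMAGE_URL)
                .outlineFacesOnImage(faceScenarios.findFaces(IMAGE_URL),
                        RectangleType.FULL, CognitiveJColourPalette.STRAWBERRY)
                .launchViewer();
    }
}
```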
Scenarios - Scenarios are real-world use-case classes that greatly simplify the interaction between the builders and the wrapper classes. While there is no reason you can't interact directly with the builders, scenarios have much of the boilerplate logic in place to reduce the burden.
Overlay - Allows for creating and writing new images based on the results from the queries. Note: work is ongoing around collision detection and observing boundaries
Wrappers - Simple domain wrappers around request/response/parameter objects (e.g. Face, FaceAttributes, Person, etc.)
Face – Detect can detect faces within an image and return the results as a collection of 'Face' results.
Example
public static void main(String[] args) {
FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.face.subscriptionKey"),
getProperty("azure.cognitive.emotion.subscriptionKey"));
ImageOverlayBuilder imageOverlayBuilder = ImageOverlayBuilder.builder(IMAGE_URL);
imageOverlayBuilder.outlineFacesOnImage(faceScenarios.findFaces(IMAGE_URL), RectangleType.FULL,
CognitiveJColourPalette.STRAWBERRY).launchViewer();
}
Face – Landmarks can detect a face within an image and apply facial landmarks.
Example
public static void main(String[] args) throws IOException {
FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.face.subscriptionKey"),
getProperty("azure.cognitive.emotion.subscriptionKey"));
Face face = faceScenarios.findSingleFace(IMAGE_URL);
ImageOverlayBuilder.builder(IMAGE_URL).outFaceLandmarksOnImage(face).launchViewer();
}
Face – Detect with Attributes displays associated attributes for detected faces
Example
public static void main(String[] args) {
FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.face.subscriptionKey"),
getProperty("azure.cognitive.emotion.subscriptionKey"));
List<Face> faces = faceScenarios.findFaces(IMAGE_URL);
ImageOverlayBuilder.builder(IMAGE_URL).outlineFacesOnImage(faces, RectangleType.CORNERED,
CognitiveJColourPalette.MEADOW).writeFaceAttributesToTheSide(faces, CognitiveJColourPalette.MEADOW).launchViewer();
}
Face – Verify will validate (with a confidence ratio) whether two different faces are of the same person.
Example
public static void main(String[] args) {
FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.face.subscriptionKey"),
getProperty("azure.cognitive.emotion.subscriptionKey"));
ImageOverlayBuilder imageOverlayBuilder = ImageOverlayBuilder.builder(CANDIDATE_1);
imageOverlayBuilder.verify(CANDIDATE_2, faceScenarios.verifyFaces(CANDIDATE_1, CANDIDATE_2)).launchViewer();
}
Face – Identify will identify a person (or people) within an image. Before the library can identify, we need to provide the Cognitive libraries with a sample set of candidates. Currently supports up to 1,000 candidates.
Example
public static void main(String[] args) {
FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.face.subscriptionKey"),
getProperty("azure.cognitive.emotion.subscriptionKey"));
ImageOverlayBuilder imageOverlayBuilder = ImageOverlayBuilder.builder(IMAGE);
List<ImageHolder> candidates = candidates();
People people = ScenarioHelper.createPeopleFromHoldingImages(candidates, ImageNamingStrategy.DEFAULT);
String groupId = faceScenarios.createGroupWithPeople(randomAlphabetic(6).toLowerCase(), people);
// groupId can now be used by the identification features to match faces found in IMAGE
}
Face – Pixelate will identify all faces within an image and pixelate them.
public static void main(String[] args) {
FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.face.subscriptionKey"),
getProperty("azure.cognitive.emotion.subscriptionKey"));
ImageOverlayBuilder imageOverlayBuilder = ImageOverlayBuilder.builder(IMAGE);
faceScenarios.findFaces(IMAGE).forEach(imageOverlayBuilder::pixelateFaceOnImage);
imageOverlayBuilder.launchViewer();
}
Emotion – Detect will detect the emotions that the face(s) within an image are showing.
public static void main(String[] args) {
FaceScenarios faceScenarios = new FaceScenarios(getProperty("azure.cognitive.face.subscriptionKey"),
getProperty("azure.cognitive.emotion.subscriptionKey"));
ImageOverlayBuilder.builder(IMAGE_URL).outlineEmotionsOnImage(faceScenarios.findEmotionFaces(IMAGE_URL)).launchViewer();
}
Vision – Describe will analyse an image and describe its contents in a human-readable caption.
public static void main(String[] args) {
ComputerVisionScenario computerVisionScenario = new ComputerVisionScenario(getProperty("azure.cognitive.vision.subscriptionKey"));
ImageDescription imageDescription = computerVisionScenario.describeImage(IMAGE_URL);
ImageOverlayBuilder.builder(IMAGE_URL).describeImage(imageDescription).launchViewer();
}
Vision – OCR will analyse an image and extract the text within it into a machine-readable stream.
public static void main(String[] args) {
ComputerVisionScenario computerVisionScenario = new ComputerVisionScenario(getProperty("azure.cognitive.vision.subscriptionKey"));
OCRResult ocrResult = computerVisionScenario.ocrImage(IMAGE_URL);
ImageOverlayBuilder.builder(IMAGE_URL).ocrImage(ocrResult).launchViewer();
}
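Vision – Thumbnail has no example in this README; the sketch below follows the pattern of the other Vision examples, but the `generateThumbnail` method name and its width/height parameters are assumptions for illustration, not confirmed API:

```java
public static void main(String[] args) {
    ComputerVisionScenario computerVisionScenario = new ComputerVisionScenario(getProperty("azure.cognitive.vision.subscriptionKey"));
    // 'generateThumbnail' is hypothetical – consult ComputerVisionScenario for the actual thumbnail call.
    BufferedImage thumbnail = computerVisionScenario.generateThumbnail(IMAGE_URL, 250, 250);
}
```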