20 results for “topic:jersey-jetty-rest”
A web-based machine learning experimentation platform using microservices architecture.
Guice WebServer Module - backed by the latest Jetty, Jersey, and Jackson.
Rhizomatic modular runtime
This is a seed project to start RESTful web development with Jersey using Kotlin.
Jetty 9.4, Jersey 2.7 and Guice 4.2.0 starter kit with SSL support and hk2-guice-bridge
Jersey 2.0 is the reference implementation of the JAX-RS 2.0 (JSR 339) specification. Along with the broader enhancements in Java EE 7, JAX-RS 2.0 was revised substantially. JAX-RS 2.0 is a framework that helps you write RESTful web services on both the client side and the server side.
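As a quick illustration of the server-side programming model described above, a minimal JAX-RS 2.0 resource might look like the following. This is a sketch only: it assumes Jersey (or any other JAX-RS 2.0 implementation) is on the classpath and the class is registered with a container; the resource path and message are made up for the example.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A minimal JAX-RS 2.0 resource: GET /hello returns plain text.
@Path("hello")
public class HelloResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String sayHello() {
        return "Hello, JAX-RS 2.0!";
    }
}
```

On the client side, JAX-RS 2.0 also standardized a Client API (`javax.ws.rs.client.ClientBuilder`), which Jersey implements as well.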
Swagger and Jetty example with Jersey and Guice
An online attendance checking and exam registering system
A Service Discovery implementation for ZooKeeper: use the service to register and discover the status of your services.
Handling Java 8 Objects and MongoDB ObjectId in Jersey RESTful
A simple RESTful web service realized with Maven, Jersey and Jetty
RESTful service deployable web archive created using Jersey and an in-memory H2 database, for which Hibernate (a JPA implementation) is used
Flight booking platform application using Jersey, Jetty, Java, and ElasticSearch
Embedded web server with REST service support, independent of any application server, for exposing resources of the blockchain-ag Corrente
A simple web application demonstrating how to consume and produce a REST web service
American Names 1890-2010 (REST API, MongoDB)
Roman numeral conversion web service & Open Weather API proxy
REST archetype with Maven + Jetty + Jersey, built on the mvnJpaHibernate project archetype
REST API for managing a cafe shop, built to practice good coding practices.
The proposed system uses a crawler to gather information from every document on the website and stores that information in the index. The index is a structured store for the unstructured data returned by the crawler. In this project, Nutch's main component, the crawler, is used for indexing, and Solr is used for searching. The crawler fetches the pages and turns them into an inverted index. This inverted index (also called a 'Lucene index') is used by the searcher to resolve users' queries. The Crawler and Searcher components can be scaled independently of each other.
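The inverted index the description above relies on can be sketched in a few lines. This is a toy illustration of the data structure, not Nutch or Lucene code: each term maps to the set of document IDs that contain it, which is what lets a searcher resolve a query without scanning every document.

```java
import java.util.*;

public class InvertedIndex {

    // Build a toy inverted index: term -> sorted set of document IDs containing it.
    public static Map<String, Set<Integer>> build(List<String> docs) {
        Map<String, Set<Integer>> index = new HashMap<>();
        for (int id = 0; id < docs.size(); id++) {
            // Lowercase and split on non-word characters to get terms.
            for (String term : docs.get(id).toLowerCase().split("\\W+")) {
                if (!term.isEmpty()) {
                    index.computeIfAbsent(term, k -> new TreeSet<>()).add(id);
                }
            }
        }
        return index;
    }

    public static void main(String[] args) {
        List<String> docs = List.of("the quick brown fox", "the lazy dog", "quick dog");
        Map<String, Set<Integer>> index = build(docs);
        System.out.println(index.get("quick")); // [0, 2]
        System.out.println(index.get("dog"));   // [1, 2]
    }
}
```

A real Lucene index additionally stores term frequencies and positions and keeps postings compressed on disk, but the term-to-documents mapping above is the core idea.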