Kafka

Kafka is a publish-subscribe messaging system for exchanging data between processes, applications and servers.

Messaging System:

A messaging system lets you send messages between processes, applications and servers.

Why we need Kafka:

  • We need an effective messaging system or platform that can capture data from high-volume sources, analyze it, and deliver the right information to the right consumers at the right time.
  • Kafka has built-in partitioning, replication, and fault tolerance, which makes it a good solution for large-scale message-processing applications.

Kafka Components:

Kafka has five components in the cluster:

1) ZooKeeper:- ZooKeeper is an independent, open-source, highly available Apache project that Kafka recommends and incorporates for its internal use. It serves primarily as a coordination and lookup service, acting as a registry index in the distributed system. Producers interact with ZooKeeper to discover which brokers are in the cluster and to identify the lead node/broker.

2) Broker:- A broker is a node/server in the cluster that hosts topics.

3) Topic:- A topic holds a stream of messages. Topics can be partitioned and distributed across multiple machines.

4) Producer:- Producers are the processes that publish incoming messages or activity data to a broker in the cluster.

5) Consumer:- Consumers are the processes that subscribe to topics and pull messages from them.

Kafka Broker:

A Kafka cluster consists of one or more servers running Kafka, known as Kafka brokers. Producers are processes that publish data (push messages) into Kafka topics within the broker. A consumer of topics pulls messages off a Kafka topic.
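As a rough sketch of the push side, here is a minimal Groovy producer using the standard Kafka Java client. The broker address localhost:9092 and the topic name click-topic are illustrative assumptions, not part of any particular setup:

import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.serialization.StringSerializer

def props = new Properties()
props.put('bootstrap.servers', 'localhost:9092')        // assumed broker address
props.put('key.serializer', StringSerializer.name)
props.put('value.serializer', StringSerializer.name)

def producer = new KafkaProducer<String, String>(props)
// Push one message into the 'click-topic' topic on the broker
producer.send(new ProducerRecord<String, String>('click-topic', 'a user clicked /home'))
producer.close()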


Kafka Topic:

A Topic is a category/feed name to which messages are stored and published. Messages are byte arrays that can store any object in any format. As said before, all Kafka messages are organized into topics: if you wish to send a message you send it to a specific topic, and if you wish to read a message you read it from a specific topic. Producer applications write data to topics and consumer applications read from topics. Messages published to the cluster stay in the cluster until a configurable retention period has passed. Kafka retains all messages for that set amount of time, so consumers are responsible for tracking their own position (offset) in each topic.
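The pull side can be sketched the same way (again illustrative; the group id click-consumers is invented). Note that it is the consumer that records its own position by committing the offset:

import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.StringDeserializer
import java.time.Duration

def props = new Properties()
props.put('bootstrap.servers', 'localhost:9092')
props.put('group.id', 'click-consumers')                // consumers in a group share partitions
props.put('key.deserializer', StringDeserializer.name)
props.put('value.deserializer', StringDeserializer.name)

def consumer = new KafkaConsumer<String, String>(props)
consumer.subscribe(['click-topic'])
def records = consumer.poll(Duration.ofSeconds(1))
records.each { println "offset ${it.offset()}: ${it.value()}" }
consumer.commitSync()                                   // record our position back in Kafka
consumer.close()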

Kafka topic partition:

Kafka topics are divided into a number of partitions, each of which contains messages in an unchangeable sequence. Each message in a partition is assigned and identified by its unique offset. A topic can also have multiple partition logs, which allows multiple consumers to read from a topic in parallel.

In Kafka, replication is implemented at the partition level. The redundant unit of a topic partition is called a replica. Each partition usually has one or more replicas, meaning that its messages are replicated over a few Kafka brokers in the cluster; a click-topic, for example, might be replicated to Kafka node 2 and Kafka node 3.

Note: It’s possible for the producer to attach a key to a message and thereby control which partition the message goes to. All messages with the same key will arrive at the same partition.
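A small illustration of keyed sends, reusing the producer from the first sketch (the key user-42 is invented): both records carry the same key, so Kafka hashes them to the same partition and their relative order is preserved.

// Messages with the same key land in the same partition
producer.send(new ProducerRecord<String, String>('click-topic', 'user-42', 'clicked /home'))
producer.send(new ProducerRecord<String, String>('click-topic', 'user-42', 'clicked /cart'))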

Partitions allow you to parallelize a topic by splitting the data in a particular topic across multiple brokers.
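For instance, a topic can be created with several partitions up front; a hedged sketch using the Kafka AdminClient, with arbitrary partition and replica counts:

import org.apache.kafka.clients.admin.AdminClient
import org.apache.kafka.clients.admin.NewTopic

def props = new Properties()
props.put('bootstrap.servers', 'localhost:9092')

def admin = AdminClient.create(props)
// 3 partitions let up to three consumers in one group read in parallel;
// each partition is replicated onto 2 brokers
admin.createTopics([new NewTopic('click-topic', 3, (short) 2)]).all().get()
admin.close()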

Every partition has one replica acting as the leader and the rest as followers. The leader replica handles all read-write requests for the specific partition, and the followers replicate the leader. If the leader server fails, one of the follower servers becomes the leader by default. When a producer publishes a message to a partition in a topic, it is forwarded to the partition’s leader. The leader appends the message to its commit log and increments its message offset. Kafka only exposes a message to a consumer after it has been committed, and each piece of data that comes in is appended to the log on the cluster.
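The commit can be observed from the producer side: with acks set to 'all', the leader acknowledges only after the in-sync followers have replicated the record, and the returned RecordMetadata reports the partition and offset assigned. A sketch, reusing the illustrative properties from the first producer example:

props.put('acks', 'all')    // wait for the leader and all in-sync replicas
def producer = new KafkaProducer<String, String>(props)
// send() is asynchronous; get() blocks until the record has been committed
def metadata = producer.send(new ProducerRecord<String, String>('click-topic', 'hello')).get()
println "committed to partition ${metadata.partition()} at offset ${metadata.offset()}"
producer.close()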


SPOCK Framework

Spock is a testing framework written in Groovy but able to test both Java and Groovy code. It is fully compatible with JUnit (it actually builds on top of the JUnit runner).

Spock allows dynamic method names:-

def "maximum of #a and #b is #c"() {
    expect:
    dao.maxNum(a, b) == c

    where:
    a | b || c
    1 | 7 || 7
    8 | 4 || 8
    9 | 9 || 9
}
After executing the above code with the JUnit runner, we can see one test per row of the where: table; with @Unroll, each method name is rendered with the actual values, e.g. "maximum of 1 and 7 is 7".

Mocking in Spock Framework:

In the following example, we mock the StudentDao class. We can create a mock in two ways:
1. StudentDao studentDao = Mock()
2. def studentDao = Mock(StudentDao)

class StudentServiceSpec extends Specification {
    StudentDao studentDao = Mock()
    StudentService studentService = new StudentService(studentDao)

    def "inserting Student Details"() {
        setup:
        studentDao.insertStudent(_ as Student) >> 1
        studentDao.getLastRecordId() >> 70

        when:
        def response = studentService.insertStudent(new Student())
        def se = (Student) response.getEntity()

        then:
        response != null
        se.getStudentId() == 70
    }
}
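Beyond stubbing return values, Spock mocks can also verify interactions. A sketch of an extra feature method that could be added to the spec above (the expected call counts are illustrative):

def "inserting a student calls the DAO exactly once"() {
    when:
    def response = studentService.insertStudent(new Student())

    then:
    // "1 *" verifies the call count while ">>" still stubs the return value
    1 * studentDao.insertStudent(_ as Student) >> 1
    1 * studentDao.getLastRecordId() >> 70
    response != null
}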

WireMock:

WireMock is great at mocking out HTTP APIs when writing integration tests.

Integration Test using WireMock in Spock Framework:

@UseModules(value = [StudentMiddleModule])
class StudentRestServiceIT extends Specification {
    @Rule
    public WireMockRule server = new WireMockRule(wireMockConfig().port(9000))

    @Inject
    IStudentMiddleService studentMiddleService

    static String PATH = '/vod/accountInfo'

    def 'validate Get Student by accountNumber'() {
        given:
        server.stubFor(get(urlPathEqualTo(PATH))
            .willReturn(aResponse()
                .withStatus(200)
                .withBodyFile('response.xml'))
        )

        when:
        StudentMiddleMessage studentMiddleMessage = studentMiddleService.getStudent('8087300010143918')

        then:
        studentMiddleMessage != null
        studentMiddleMessage.getStudentSet() != null
        studentMiddleMessage.getStudentSet().getCredit() == '4779.37'
        studentMiddleMessage.getStudentSet().getStudentItems() != null
        studentMiddleMessage.getStudentSet().getStudentItems().size() == 1
    }
}
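WireMock can also assert that the stubbed endpoint was actually hit. For example, this line could be appended to the then: block above:

// Fails the test if the service never issued a GET to the stubbed path
server.verify(getRequestedFor(urlPathEqualTo(PATH)))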

Why we chose Spock:

  • Spock has built-in support for Mocking and Stubbing without an external library.
  • One of the killer features of Spock is the detail it gives when a test fails. JUnit mentions only the expected and actual values, whereas Spock records the surrounding context, showing intermediate results and allowing the developer to pinpoint the problem with greater ease than JUnit.

Conclusion:-

In Spock, we don’t have tests, we have specifications. These are normal Groovy classes that extend the Specification class, which hooks into the JUnit runner. Our class contains a set of specifications, represented by methods with funny-method-names-in-quotes. The funny-method-names-in-quotes take advantage of some Groovy magic to let us express our requirements in a very readable form. And since these classes run on JUnit, we can run them from within Eclipse like a normal Groovy unit test.
