markhuyong's garden



elasticsearch intro

Posted on 2018-06-11 | In elasticsearch

How to use

Take it easy

The Mists of Time

Many years ago, a newly married unemployed developer called Shay Banon followed his wife to London, where she was studying to be a chef. While looking for gainful employment, he started playing with an early version of Lucene, with the intent of building his wife a recipe search engine.

Working directly with Lucene can be tricky, so Shay started work on an abstraction layer to make it easier for Java programmers to add search to their applications. He released this as his first open source project, called Compass.

Later Shay took a job working in a high-performance, distributed environment with in-memory data grids. The need for a high-performance, real-time, distributed search engine was obvious, and he decided to rewrite the Compass libraries as a standalone server called Elasticsearch.

The first public release came out in February 2010. Since then, Elasticsearch has become one of the most popular projects on GitHub with commits from over 300 contributors. A company has formed around Elasticsearch to provide commercial support and to develop new features, but Elasticsearch is, and forever will be, open source and available to all.

Shay’s wife is still waiting for the recipe search…


  • elastic/elasticsearch-definitive-guide: The Definitive Guide to Elasticsearch
  • ElasticSearch Cookbook, Second Edition: Alberto Paro: 9781783554836: Amazon.com: Books

use cases

  • Tag Cloud
  • Heatmaps
  • Vector Maps
  • Detect Abnormal

    UID: 0000000000600036

    // detect abnormal
    tags:watchdog_frame AND message: (+0000000000600036)
  • Wikipedia uses Elasticsearch to provide full-text search with highlighted search snippets, and search-as-you-type and did-you-mean suggestions.

    • Wikimedia moving to Elasticsearch – Wikimedia Blog
    • Logstash - Wikitech
  • The Guardian uses Elasticsearch to combine visitor logs with social-network data to provide real-time feedback to its editors about the public’s response to new articles.

    • The Guardian uses the Elastic Stack to deliver real-time visibility of site traffic | Elastic
    • Making Journalism Better With Elasticsearch | Elastic
    • How Elasticsearch Powers the Guardian’s Newsroom
  • Stack Overflow combines full-text search with geolocation queries and uses more-like-this to find related questions and answers.

    • Stack Overflow Uses Facets and Geo-Coding | Elastic
    • How does Stack Overflow implement its search indexing? - Meta Stack Exchange
    • A new search engine for Stack Exchange - Meta Stack Exchange
    • Nick Craver - Stack Overflow: The Architecture - 2016 Edition
    • StackOverflow Update: 560M Pageviews a Month, 25 Servers, and It’s All About Performance - High Scalability -
  • GitHub uses Elasticsearch to query 130 billion lines of code.

    • GitHub uses Elasticsearch to index over 8 million code repositories | Elastic
    • Elasticsearch in Anger: Stories from the GitHub Search Clusters | Elastic
  • 15 Companies Using the ELK Stack

use Date Range

// search logs within the defined date range
logtime:[2018-05-01 TO 2018-05-30]

use tags and message to search frameLog

// search frames where tags is watchdog_frame and message contains 82973 (an orderId)
tags:watchdog_frame AND message: ( 82973)

// commands sent down to the device
// OrderUpdateOperation (order dispatch)
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x81")

// TimeAdjustOperation (time adjustment)
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x04")

// AirConditionerOperation (air-conditioner configuration)
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x06")

// OutputOperation (output control)
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x82")

// RealTimeOperation (real-time operation)
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x84")

// start RealTimeOperation (real-time operation) sub-commands

// request status report command
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x84" AND "command=0x00")

// request version report command
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x84" AND "command=0x01")

// remote reboot command
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x84" AND "command=0x02")

// reset command
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x84" AND "command=0x03")

// server door-open command
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x84" AND "command=0x04")

// request location command
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x84" AND "command=0x05")

// request GSM info command
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x84" AND "command=0x06")

// device lock command
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x84" AND "command=0x07")

// end RealTimeOperation (real-time operation) sub-commands

// OutputModeConfiguration (output mode)
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x08")

// MultimediaOperation (multimedia operation)
tags:watchdog_frame AND message: (0000000000600042 AND "0x01,0x83")

// frames reported by the device

// RoutineStatusReport (routine status)
tags:watchdog_frame AND message: (0000000000600042 AND "0x03,0x03")

// RoutineStatusReport (air conditioner off)
tags:watchdog_frame AND message: (0000000000600042 AND "0x03,0x03" AND "airConditionState" AND "State:Close")

// RoutineStatusReport (air conditioner on)
tags:watchdog_frame AND message: (0000000000600042 AND "0x03,0x03" AND "airConditionState" AND "State:Open")

// RoutineStatusReport (power-on frame)
tags:watchdog_frame AND message: (0000000000600042 AND "0x03,0x01")

// RoutineStatusReport (power-off frame)
tags:watchdog_frame AND message: (0000000000600042 AND "0x03,0x02")

// RoutineStatusReport (protocol watchdog reboot frame)
tags:watchdog_frame AND message: (0000000000600042 AND "0x03,0x05")

// UserInputReport (user input event frame)
tags:watchdog_frame AND message: (0000000000600042 AND "0x03,0x21")

// DoorStatusReport (door status)
tags:watchdog_frame AND message: (0000000000600042 AND "0x03,0x22")

// GSMModuleReport (version info)
tags:watchdog_frame AND message: (0000000000600042 AND "0x03,0x92")

// SensorAlarmReport (door alarm)
tags:watchdog_frame AND message: (0000000000600042 AND "0x03,0x31" AND "DoorAlarm")

// SensorAlarmReport (smoke alarm)
tags:watchdog_frame AND message: (0000000000600042 AND "0x03,0x31" AND "SmokeAlarm")

// PowerEventReport (power event)
tags:watchdog_frame AND message: (0000000000600042 AND "0x03,0x11")

// GSMModuleReport (GSM card info)
tags:watchdog_frame AND message: (0000000000600042 AND "0x03,0x92")

// LocationReport (location)
tags:watchdog_frame AND message: (0000000000600042 AND "0x03,0x91")

// HeartBeat (heartbeat)
tags:watchdog_frame AND message: (0000000000600042 AND "0x04,0xFF")

search ActionLog

logType: WARN
logType: INFO
logType: ERROR

more

// use a wildcard: ? matches any single character
tags:watchdog_frame AND message: (+0000000000600042 AND 0x0?,0x0?)

Direct access to the Elasticsearch API

// index settings
$ curl -XPUT https://<endpoint>/blog -d '{
  "settings": { "number_of_shards": 3, "number_of_replicas": 1 }
}'

// create a post
$ curl -XPOST https://<endpoint>/blog/post/1 -d '{
  "author": "jon handler",
  "title": "Amazon ES Launch"
}'

// bulk create posts
$ curl -XPOST https://<endpoint>/blog/post/_bulk -d '
{ "index": { "_index": "blog", "_type": "post", "_id": "2" } }
{ "title": "Amazon ES for search", "author": "carl meadows" }
{ "index": { "_index": "blog", "_type": "post", "_id": "3" } }
{ "title": "Analytics too", "author": "vivek sriram" }
'

// search posts
$ curl -XGET 'https://<endpoint>/_search?q=ES'
{
  "took": 16,
  "timed_out": false,
  "_shards": {
    "total": 3,
    "successful": 3,
    "failed": 0
  },
  "hits": {
    "total": 2,
    "max_score": 0.13424811,
    "hits": [
      {
        "_index": "blog",
        "_type": "post",
        "_id": "1",
        "_score": 0.13424811,
        "_source": {
          "author": "jon handler",
          "title": "Amazon ES Launch"
        }
      },
      {
        "_index": "blog",
        "_type": "post",
        "_id": "2",
        "_score": 0.11506981,
        "_source": {
          "title": "Amazon ES for search",
          "author": "carl meadows"
        }
      }
    ]
  }
}

// Is it running?
$ curl 'http://47.97.104.178:9200/?pretty'
{
  "name" : "Vvew76v",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "RRMIEU-MQUS-BE5sVOKDKQ",
  "version" : {
    "number" : "6.1.1",
    "build_hash" : "bd92e7f",
    "build_date" : "2017-12-17T20:23:25.338Z",
    "build_snapshot" : false,
    "lucene_version" : "7.1.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
  • Query String Query | Elasticsearch Reference [6.x] | Elastic

How to view

Why Elastic

Distributed & Scalable

  • Resilient; designed for scale-out

  • High availability; multitenancy

  • Structured & unstructured data

Developer Friendly

  • Schemaless

  • Native JSON

  • Client libraries

  • Apache Lucene

Search & Analytics

  • Near Real-time

  • Full-text search

  • Aggregations

  • Geospatial

  • Multilingual

The Elastic Stack


Architecture


Elasticsearch Cluster deploy


TERMINOLOGY

MySQL                                              Elasticsearch
------------------------------------------------   ----------------------
Database                                           Index
Table                                              Type
Row                                                Document
Column                                             Field
Schema                                             Mapping
Index                                              Everything is indexed
Partition                                          Shard
SQL                                                Query DSL
SELECT field, COUNT(*) FROM table GROUP BY field   Facets (Aggregations)
SELECT * FROM table ...                            GET http://...
UPDATE table SET ...                               PUT http://...

How to understand

Basic Concepts

  • Cluster:
    A cluster consists of one or more nodes which share the same cluster name. Each cluster has a single master node which is chosen automatically by the cluster and which can be replaced if the current master node fails
  • Node:
A node is a running instance of ElasticSearch which belongs to a cluster. Multiple nodes can be started on a single server for testing purposes, but usually you should have one node per server. At startup, a node will use unicast (or multicast, if specified) to discover an existing cluster with the same cluster name and will try to join that cluster.

  • Index:
    An index is like a database in a relational database. It has a mapping which defines multiple types. An index is a logical namespace which maps to one or more primary shards and can have zero or more replica shards.

  • Type:
    A type is like a table in a relational database. Each type has a list of fields that can be specified for documents of that type. The mapping defines how each field in the document is analyzed.

  • Document:
    A document is a JSON document which is stored in ElasticSearch. It is like a row in a table in a relational database. Each
    document is stored in an index and has a type and an id. A document is a JSON object (also known in other languages as a hash/hashmap/associative array) which contains zero or more fields, or key-value pairs. The original JSON document that is indexed will be stored in the _source field, which is returned by default when getting or searching for a document.

  • Field:
A document contains a list of fields, or key-value pairs. The value can be a simple (scalar) value (e.g. a string, integer, or date), or a nested structure like an array or an object. A field is similar to a column in a table in a relational database. The mapping for each field has a field type (not to be confused with document type) which indicates the type of data that can be stored in that field, e.g. integer, string, object. The mapping also allows you to define (amongst other things) how the value for a field should be analyzed.

  • Mapping:
A mapping is like a ‘schema definition’ in a relational database. Each index has a mapping, which defines each type within the index, plus a number of index-wide settings. A mapping can either be defined explicitly, or it will be generated automatically when a document is indexed.

  • Facets (Aggregations):
    Faceted search refers to a way of exploring large amounts of data by displaying summaries about various partitions of the data and then allowing the user to narrow the navigation to a specific partition.
    In Elasticsearch, facets were also the name of the feature that computed these summaries. Facets were replaced by aggregations in Elasticsearch 1.0, which are a superset of facets.

  • Shard:
    A shard is a single Lucene instance. It is a low-level “worker” unit which is managed automatically by ElasticSearch. An index is a logical namespace which points to primary and replica shards.
ElasticSearch distributes shards amongst all nodes in the cluster, and can move shards automatically from one node to another in the case of node failure, or when new nodes are added.

    • lucene - need elasticsearch index sharding explanation - Stack Overflow
  • Primary Shard:
    Each document is stored in a single primary shard. When a document is sent for indexing, it is indexed first on the primary shard, then on all replicas of the primary shard. By default, an index has 5 primary shards. You can specify fewer or more primary shards to scale the number of documents that your index can handle.

  • Replica Shard:
    Each primary shard can have zero or more replicas. A replica is a copy of the primary shard, and has two purposes:
    a. increase failover: a replica shard can be promoted to a primary shard if the primary fails.
    b. increase performance: get and search requests can be handled by primary or replica shards.

  • Identified by “_index/_type/_id”

Configuration

  • cluster.name:
    The cluster name identifies a cluster for auto-discovery. If the production environment has multiple clusters on the same network, the cluster name must be unique.
  • node.name:
    Node names are generated dynamically on startup, but the user can assign a name to a node manually.
  • node.master & node.data:
    Every node can be configured to allow or deny being eligible as the master, and to allow or deny storing data. node.master allows this node to be eligible as a master node (enabled by default) and node.data allows this node to store data (enabled by default).

The following are the settings used to design advanced cluster topologies.

  1. If a node should never become a master node and should only hold data, it will be the "workhorse" of the cluster.
    node.master: false, node.data: true

  2. If a node should only serve as a master, not store data, and keep free resources, it will be the "coordinator" of the cluster.
    node.master: true, node.data: false

  3. If a node should be neither a master nor a data node, it can act as a "search load balancer" (fetching data from nodes, aggregating results, etc.).
    node.master: false, node.data: false

  • Index:
    A number of options (such as shard/replica options, mapping or analyzer definitions, translog settings, etc.) can be set for indices globally in this file.
    Note that it makes more sense to configure index settings specifically for a certain index, either when creating it or by using the index templates API.
    Example:
    index.number_of_shards: 5, index.number_of_replicas: 1

  • Discovery:
    ElasticSearch supports different types of discovery, which make multiple ElasticSearch instances talk to each other.
    The default type of discovery is multicast. Unicast discovery allows you to explicitly control which nodes will be used to discover the cluster. It can be used when multicast is not available, or to restrict cluster communication.
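
Putting these options together, here is a minimal elasticsearch.yml sketch for a data-only node (topology 1 above). All values, including the unicast seed hosts, are illustrative assumptions rather than settings taken from a real cluster:

# elasticsearch.yml -- illustrative values only
cluster.name: production-logs                # must be unique per network
node.name: node-1                            # otherwise generated at startup

# role of this node: a data-only "workhorse"
node.master: false
node.data: true

# index-level defaults, as described above (newer versions prefer per-index settings)
index.number_of_shards: 5
index.number_of_replicas: 1

# unicast discovery: seed nodes used to find the rest of the cluster
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2"]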

Index Versus Index Versus Inverted Index


You may already have noticed that the word index is overloaded with several meanings in the context of Elasticsearch. A little clarification is necessary:

  • Index (noun)

    As explained previously, an index is like a database in a traditional relational database. It is the place to store related documents. The plural of index is indices or indexes.

  • Index (verb)

    To index a document is to store a document in an index (noun) so that it can be retrieved and queried. It is much like the INSERT keyword in SQL except that, if the document already exists, the new document would replace the old.

  • Inverted index

    Relational databases add an index, such as a B-tree index, to specific columns in order to improve the speed of data retrieval. Elasticsearch and Lucene use a structure called an inverted index for exactly the same purpose. By default, every field in a document is indexed (has an inverted index) and thus is searchable. A field without an inverted index is not searchable. We discuss inverted indexes in more detail in Inverted Index.

    • A first take at building an inverted index
    • Th30z (Matteo Bertozzi Code): Python: Inverted Index for dummies

Cluster Architecture


  • Partitioning your documents into different containers or shards, which can be stored on a single node or on multiple nodes.
  • Balancing these shards across the nodes in your cluster to spread the indexing and search load.
  • Duplicating each shard to provide redundant copies of your data, to prevent data loss in case of hardware failure.
  • Routing requests from any node in the cluster to the nodes that hold the data you’re interested in.
  • Seamlessly integrating new nodes as your cluster grows or redistributing shards to recover from node loss.

Index Request


Request:
PUT test/cities/1
{
  "rank": 3,
  "city": "Hyderabad",
  "state": "Telangana",
  "population2014": 7750000,
  "land_area": 625,
  "location": {
    "lat": 17.37,
    "lon": 78.48
  },
  "abbreviation": "Hyd"
}

Response:
{ "_index": "test", "_type": "cities", "_id": "1", "_version": 1, "created": true }

Search Request


Request:
GET test/cities/1?pretty

Response:
{
  "_index": "test",
  "_type": "cities",
  "_id": "1",
  "_version": 1,
  "found": true,
  "_source": {
    "rank": 3,
    "city": "Hyderabad",
    "state": "Telangana",
    "population2014": 7750000,
    "land_area": 625,
    "location": {
      "lat": 17.37,
      "lon": 78.48
    },
    "abbreviation": "Hyd"
  }
}

Updating a document

Request:
PUT test/cities/1
{
  "rank": 3,
  "city": "Hyderabad",
  "state": "Telangana",
  "population2013": 7023000,
  "population2014": 7750000,
  "land_area": 625,
  "location": {
    "lat": 17.37,
    "lon": 78.48
  },
  "abbreviation": "Hyd"
}

Response:
{ "_index": "test", "_type": "cities", "_id": "1", "_version": 2, "created": false }

Searching

  • Search across all indexes and all types
    http://localhost:9200/_search

  • Search across all types in the test index.
    http://localhost:9200/test/_search

  • Search explicitly for documents of type cities within the test index.
    http://localhost:9200/test/cities/_search

Reference:

  • Database index - Wikipedia
  • Inverted index - Wikipedia
  • B+ tree - Wikipedia
  • Aggregation features, Elasticsearch vs. MySQL (vs. MongoDB) - Ulf WendelUlf Wendel
  • ElasticSearch-Head
  • Marvel
  • Paramedic
  • Bigdesk

trello-search

Posted on 2018-04-24 | In tools

Special Operators

  • Searching for Cards (All Boards) - Trello Help

Search operators refine your search to help you find specific cards and create highly tailored lists. Trello will suggest operators for you as you type, but here’s a full list to keep in mind.

These operators will also work in the “Archive” search bar. See Archiving and deleting cards for more details.

-operator - You can add “-” to any operator to do a negative search, such as -has:members to search for cards without any members assigned.

@name - Returns cards assigned to a member. If you start typing @, Trello will suggest members for you. member: also works. @me will include only your cards.

label: - Returns labeled cards. Trello will suggest labels for you if you start typing a name or color. For example, label:”FIX IT” will return cards with the label named “FIX IT”. #label also works.

board:id - Returns cards within a specific board. If you start typing board:, Trello will suggest boards for you. You can search by board name, too, such as “board:trello” to search only cards on boards with trello in the board name.

-board:parkbox spring // search for "spring" on all boards except the parkbox board

list:name - Returns cards within the list named “name”. Or whatever you type besides “name”.

has:attachments - Returns cards with attachments. has:description, has:cover, has:members, and has:stickers also work as you would expect.

due:day - Returns cards due within 24 hours. due:week returns cards that are due within the following 7 days. due:month, and due:overdue also work as expected. You can search for a specific day range. For example, adding due:14 to search will include cards due in the next 14 days. You can also search for due:complete or due:incomplete to search for due dates that are marked as complete or incomplete.

created:day - Returns cards created in the last 24 hours. created:week and created:month also work as expected. You can search for a specific day range. For example, adding created:14 to the search will include cards created in the last 14 days.

edited:day - Returns cards edited in the last 24 hours. edited:week and edited:month also work as expected. You can search for a specific day range. For example, adding edited:21 to the search will include cards edited in the last 21 days.

description:, checklist:, comment:, and name: - Returns cards matching the text of card descriptions, checklists, comments, or names. For example, comment:”FIX IT” will return cards with “FIX IT” in a comment.

is:open returns open cards. is:archived returns archived cards. If neither is specified, Trello will return both types.

is:starred - Only include cards on starred boards.

spring-framework-annotations

Posted on 2018-04-19 | In spring
  • A Guide to Spring Framework Annotations - DZone Java

The Java programming language provided support for annotations from Java 5.0 onward. Leading Java frameworks were quick to adopt annotations, and the Spring Framework started using annotations from the 2.5 release. Due to the way they are defined, annotations provide a lot of context in their declaration.

Prior to annotations, the behavior of the Spring Framework was largely controlled through XML configuration. Today, the use of annotations provides us tremendous capability in how we configure the behavior of the Spring Framework.

In this post, we’ll take a look at the annotations available in the Spring Framework.

Core Spring Framework Annotations

@Required

This annotation is applied to bean setter methods. Consider a scenario where you need to enforce a required property. The @Required annotation indicates that the affected bean must be populated at configuration time with the required property. Otherwise, an exception of type BeanInitializationException is thrown.
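
A minimal sketch of how this looks in practice; the Employee class and its name property are hypothetical:

public class Employee {

    private String name;

    @Required // the container must set this property, or a BeanInitializationException is thrown
    public void setName(String name) {
        this.name = name;
    }
}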

@Autowired

This annotation is applied to fields, setter methods, and constructors. The @Autowired annotation injects object dependency implicitly.

When you use @Autowired on fields and pass the values for the fields using the property name, Spring will automatically assign the fields with the passed values.

You can even use @Autowired on private properties, as shown below. (This is a very poor practice though!)

public class Customer {

    @Autowired
    private Person person;

    private int type;
}

When you use @Autowired on setter methods, Spring tries to perform byType autowiring on the method. You are instructing Spring that it should initialize this property using a setter method, where you can add your custom code, like initializing any other property with this property.

public class Customer {

    private Person person;

    @Autowired
    public void setPerson(Person person) {
        this.person = person;
    }
}

Consider a scenario where you need an instance of class A, but you do not store A in the field of the class. You just use A to obtain an instance of B, and you are storing B in this field. In this case, setter method autowiring will better suit you. You will not have class-level unused fields.

When you use @Autowired on a constructor, then constructor injection happens at the time of object creation. It tells the constructor to autowire when used as a bean. One thing to note here is that only one constructor of any bean class can carry the @Autowired annotation.

@Component
public class Customer {

    private Person person;

    @Autowired
    public Customer(Person person) {
        this.person = person;
    }
}

NOTE: As of Spring 4.3, @Autowired became optional on classes with a single constructor. In the above example, Spring would still inject an instance of the Person class if you omitted the @Autowired annotation.

@Qualifier

This annotation is used along with the @Autowired annotation. When you need more control of the dependency injection process, @Qualifier can be used. @Qualifier can be specified on individual constructor arguments or method parameters. This annotation is used to avoid the confusion that occurs when you create more than one bean of the same type and want to wire only one of them with a property.

Consider an example where an interface BeanInterface is implemented by two beans, BeanB1 and BeanB2.

@Component
public class BeanB1 implements BeanInterface {
    // ...
}

@Component
public class BeanB2 implements BeanInterface {
    // ...
}

Now if BeanA autowires this interface, Spring will not know which one of the two implementations to inject.

One solution to this problem is the use of the @Qualifier annotation.

@Component
public class BeanA {

    @Autowired
    @Qualifier("beanB2")
    private BeanInterface dependency;
    // ...
}

With the @Qualifier annotation added, Spring will now know which bean to autowire, where beanB2 is the name of BeanB2.

@Configuration

This annotation is used on classes that define beans. @Configuration is an analog for an XML configuration file – it is configuration using Java classes. A Java class annotated with @Configuration is a configuration by itself and will have methods to instantiate and configure the dependencies.

Here is an example:

@Configuration
public class DataConfig {

    @Bean
    public DataSource source() throws SQLException {
        OracleDataSource source = new OracleDataSource();
        source.setURL("jdbc:oracle:thin:@localhost:1521:XE"); // illustrative URL
        source.setUser("app_user");                           // illustrative user
        return source;
    }

    @Bean
    public PlatformTransactionManager manager() throws SQLException {
        // DataSourceTransactionManager is the spring-jdbc transaction manager
        return new DataSourceTransactionManager(source());
    }
}

@ComponentScan

This annotation is used with the @Configuration annotation to allow Spring to know the packages to scan for annotated components. @ComponentScan is also used to specify the base packages to scan using the basePackageClasses or basePackages attributes. If specific packages are not defined, scanning will occur from the package of the class that declares this annotation.
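
A short sketch, assuming the annotated components live under a hypothetical com.example.app package:

@Configuration
@ComponentScan(basePackages = "com.example.app") // scan this package and its sub-packages
public class ScanConfig {
}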

@Bean

This annotation is used at the method level. The @Bean annotation works with @Configuration to create Spring beans. As mentioned earlier, @Configuration will have methods to instantiate and configure dependencies. Such methods will be annotated with @Bean. The method annotated with this annotation works as the bean ID, and it creates and returns the actual bean.

Here is an example:

@Configuration
public class AppConfig {

    @Bean
    public Person person() {
        return new Person(address());
    }

    @Bean
    public Address address() {
        return new Address();
    }
}

@Lazy

This annotation is used on component classes. By default, all autowired dependencies are created and configured at startup. But if you want to initialize a bean lazily, you can use the @Lazy annotation over the class. This means that the bean will be created and initialized only when it is first requested. You can also use this annotation on @Configuration classes. This indicates that all @Bean methods within that @Configuration should be lazily initialized.
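
A small sketch; the HeavyResource class is hypothetical, and the constructor message simply makes the deferred creation visible:

@Component
@Lazy
public class HeavyResource {

    public HeavyResource() {
        // printed on first injection or lookup, not at context startup
        System.out.println("HeavyResource created");
    }
}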

@Value

This annotation is used at the field, constructor parameter, and method parameter levels. The @Value annotation indicates a default value expression for the field or parameter to initialize the property with. As the @Autowired annotation tells Spring to inject an object into another when it loads your application context, you can also use the @Value annotation to inject values from a property file into a bean’s attribute. It supports both #{…} and ${…} placeholders.
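
A brief sketch; the mail.host property key is an assumption for illustration:

@Component
public class MailSettings {

    @Value("${mail.host:localhost}") // ${...} placeholder from a property file, with a default
    private String host;

    @Value("#{ 2 * 1024 }") // #{...} SpEL expression evaluated at injection time
    private int bufferSize;
}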

Spring Framework Stereotype Annotations

@Component

This annotation is used on classes to indicate a Spring component. The @Component annotation marks the Java class as a bean or component so that the component-scanning mechanism of Spring can add it into the application context.

@Controller

The @Controller annotation is used to indicate the class is a Spring controller. This annotation can be used to identify controllers for Spring MVC or Spring WebFlux.

@Service

This annotation is used on a class. @Service marks a Java class that performs some service, such as executing business logic, performing calculations, and calling external APIs. This annotation is a specialized form of the @Component annotation intended to be used in the service layer.

@Repository

This annotation is used on Java classes that directly access the database. The @Repository annotation works as a marker for any class that fulfills the role of repository or Data Access Object.

This annotation has an automatic translation feature. For example, when an exception occurs in the @Repository, there is a handler for that exception and there is no need to add a try-catch block.

Spring Boot Annotations

@EnableAutoConfiguration

This annotation is usually placed on the main application class. The @EnableAutoConfiguration annotation implicitly defines a base “search package”. This annotation tells Spring Boot to start adding beans based on classpath settings, other beans, and various property settings.

@SpringBootApplication

This annotation is used on the application class while setting up a Spring Boot project. The class that is annotated with @SpringBootApplication must be kept in the base package. The one thing that @SpringBootApplication does is component scanning, but it will scan only its sub-packages. As an example, if you put the class annotated with @SpringBootApplication in com.example, then @SpringBootApplication will scan all its sub-packages, such as com.example.a, com.example.b, and com.example.a.x.

@SpringBootApplication is a convenience annotation that adds all of the following (a sketch follows the list):

  • @Configuration
  • @EnableAutoConfiguration
  • @ComponentScan
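
A minimal sketch of the usual entry point (class name assumed):

@SpringBootApplication // stands in for @Configuration + @EnableAutoConfiguration + @ComponentScan
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}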

Spring MVC and REST Annotations

@Controller

This annotation is used on Java classes that play the role of controller in your application. The @Controller annotation allows autodetection of component classes in the classpath and auto-registering bean definitions for them. To enable autodetection of such annotated controllers, you can add component scanning to your configuration. The Java class annotated with @Controller is capable of handling multiple request mappings.

This annotation can be used with Spring MVC and Spring WebFlux.

@RequestMapping

This annotation is used at both the class and method level. The @RequestMapping annotation is used to map web requests onto specific handler classes and handler methods. When @RequestMapping is used on the class level, it creates a base URI for which the controller will be used. When this annotation is used on methods, it will give you the URI on which the handler methods will be executed. From this, you can infer that the class-level request mapping will remain the same, whereas each handler method will have its own request mapping.

Sometimes you may want to perform different operations based on the HTTP method used, even though the request URI may remain the same. In such situations, you can use the method attribute of @RequestMapping with an HTTP method value to narrow down the HTTP methods in order to invoke the methods of your class.

Here is a basic example of how a controller along with request mappings work:

@Controller
@RequestMapping("/welcome")
public class WelcomeController {

    @RequestMapping(method = RequestMethod.GET)
    public String welcomeAll() {
        return "welcome all";
    }
}

In this example, only GET requests to /welcome are handled by the welcomeAll() method.

This annotation also can be used with Spring MVC and Spring WebFlux.

The @RequestMapping annotation is very versatile. Please see my in-depth post on Request Mapping here.

@CookieValue

This annotation is used at the method parameter level. @CookieValue is used as an argument of a request mapping method. The HTTP cookie is bound to the @CookieValue parameter for a given cookie name. This annotation is used in methods annotated with @RequestMapping.
Let us consider that the following cookie value is received with an HTTP request:

JSESSIONID=418AB76CD83EF94U85YD34W

To get the value of the cookie, use @CookieValue like this:

@RequestMapping("/cookieValue")
public void getCookieValue(@CookieValue("JSESSIONID") String cookie) {
    // cookie now holds the JSESSIONID value from the request
}

@CrossOrigin

This annotation is used both at the class and method levels to enable cross-origin requests. In many cases, the host that serves JavaScript will be different from the host that serves the data. In such a case, Cross Origin Resource Sharing (CORS) enables cross-domain communication. To enable this communication, you just need to add the @CrossOrigin annotation.

By default, the @CrossOrigin annotation allows all origins, all headers, the HTTP methods specified in the @RequestMapping annotation, and a maxAge of 30 minutes. You can customize the behavior by specifying the corresponding attribute values.

An example of using @CrossOrigin at both the controller and handler method levels is below:

@CrossOrigin(maxAge = 3600)
@RestController
@RequestMapping("/account")
public class AccountController {

    @CrossOrigin(origins = "http://example.com")
    @RequestMapping("/message")
    public Message getMessage() {
        // ...
    }

    @RequestMapping("/note")
    public Note getNote() {
        // ...
    }
}

In this example, both the getMessage() and getNote() methods will have a maxAge of 3600 seconds. Also, getMessage() will only allow cross-origin requests from http://example.com, while getNote() will allow cross-origin requests from all hosts.

Composed @RequestMapping Variants

Spring Framework 4.3 introduced the following method-level variants of the @RequestMapping annotation to better express the semantics of the annotated methods. Using these annotations has become the standard way of defining endpoints. They act as wrappers for @RequestMapping.

These annotations can be used with Spring MVC and Spring WebFlux.

@GetMapping

This annotation is used for mapping HTTP GET requests onto specific handler methods. @GetMapping is a composed annotation that acts as a shortcut for @RequestMapping(method = RequestMethod.GET).

@PostMapping

This annotation is used for mapping HTTP POST requests onto specific handler methods. @PostMapping is a composed annotation that acts as a shortcut for @RequestMapping(method = RequestMethod.POST).

@PutMapping

This annotation is used for mapping HTTP PUT requests onto specific handler methods. @PutMapping is a composed annotation that acts as a shortcut for @RequestMapping(method = RequestMethod.PUT).

@PatchMapping

This annotation is used for mapping HTTP PATCH requests onto specific handler methods. @PatchMapping is a composed annotation that acts as a shortcut for @RequestMapping(method = RequestMethod.PATCH).

@DeleteMapping

This annotation is used for mapping HTTP DELETE requests onto specific handler methods. @DeleteMapping is a composed annotation that acts as a shortcut for @RequestMapping(method = RequestMethod.DELETE).
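
A short sketch showing two of these shortcuts side by side, reusing the welcome example from earlier:

@RestController
@RequestMapping("/welcome")
public class WelcomeRestController {

    @GetMapping // same as @RequestMapping(method = RequestMethod.GET)
    public String welcomeAll() {
        return "welcome all";
    }

    @PostMapping // same as @RequestMapping(method = RequestMethod.POST)
    public String greet(@RequestBody String name) {
        return "welcome " + name;
    }
}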

@ExceptionHandler

This annotation is used at the method level to handle exceptions at the controller level. The @ExceptionHandler annotation is used to define the class of exception it will catch. You can use this annotation on methods that should be invoked to handle an exception. The @ExceptionHandler values can be set to an array of exception types. If an exception is thrown that matches one of the types in the list, then the method annotated with the matching @ExceptionHandler will be invoked.
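
A minimal sketch; the exception types chosen here are arbitrary examples:

@Controller
public class OrderController {

    @ExceptionHandler({ IllegalArgumentException.class, IllegalStateException.class })
    public ResponseEntity<String> handleBadInput(RuntimeException ex) {
        // invoked when a handler method in this controller throws one of the listed types
        return ResponseEntity.badRequest().body(ex.getMessage());
    }
}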

@InitBinder

This annotation is a method-level annotation that plays the role of identifying the methods that initialize the WebDataBinder, a DataBinder that binds request parameters to JavaBean objects. To customize request parameter data binding, you can use @InitBinder annotated methods within your controller. The methods annotated with @InitBinder support all argument types that handler methods support.

The @InitBinder annotated methods will get called for each HTTP request if you don’t specify the value element of this annotation. The value element can be a single or multiple form names or request parameters that the init binder method is applied to.
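
A sketch of a typical use: registering a date editor so "yyyy-MM-dd" request parameters bind to java.util.Date fields (WebDataBinder and CustomDateEditor come from Spring; the controller itself is hypothetical):

@Controller
public class DateController {

    @InitBinder
    public void initBinder(WebDataBinder binder) {
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd");
        binder.registerCustomEditor(Date.class, new CustomDateEditor(format, false));
    }
}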

@Mappings and @Mapping

This annotation is used on fields. The @Mapping annotation is a meta-annotation that indicates a web mapping annotation. When mapping different field names, you need to configure the source field to its target field, and to do that, you have to add the @Mappings annotation. This annotation accepts an array of @Mapping having the source and the target fields.

@MatrixVariable

This annotation is used to annotate request handler method arguments so that Spring can inject the relevant bits of a matrix URI. Matrix variables can appear on any segment each separated by a semicolon. If a URL contains matrix variables, the request mapping pattern must represent them with a URI template. The @MatrixVariable annotation ensures that the request is matched with the correct matrix variables of the URI.

@PathVariable

This annotation is used to annotate request handler method arguments. The @RequestMapping annotation can be used to handle dynamic changes in the URI where a certain URI value acts as a parameter. You can specify this parameter using a regular expression. The @PathVariable annotation can be used to declare this parameter.
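
A small sketch; the /cities URI is an assumption:

@RequestMapping("/cities/{id}")
@ResponseBody
public String getCity(@PathVariable("id") String id) {
    // "id" is taken from the matching URI segment, e.g. /cities/42 binds id = "42"
    return "city " + id;
}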

@RequestAttribute

This annotation is used to bind a request attribute to a handler method parameter. Spring retrieves the named attribute’s value to populate the parameter annotated with @RequestAttribute. While the @RequestParam annotation is used to bind parameter values from a query string, @RequestAttribute is used to access objects that have been populated on the server side.

@RequestBody

This annotation is used to annotate request handler method arguments. The @RequestBody annotation indicates that a method parameter should be bound to the value of the HTTP request body. The HttpMessageConverter is responsible for converting the HTTP request message to an object.

@RequestHeader

This annotation is used to annotate request handler method arguments. The @RequestHeader annotation is used to map a controller parameter to a request header value. When Spring maps the request, @RequestHeader checks the header with the name specified within the annotation and binds its value to the handler method parameter. This annotation helps you to get the header details within the controller class.
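
A minimal sketch that echoes one request header back:

@RequestMapping("/agent")
@ResponseBody
public String userAgent(@RequestHeader("User-Agent") String userAgent) {
    // the User-Agent header of the incoming request is bound to the parameter
    return userAgent;
}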

@RequestParam

This annotation is used to annotate request handler method arguments. Sometimes you get the parameters in the request URL, mostly in GET requests. In that case, along with the @RequestMapping annotation, you can use the @RequestParam annotation to retrieve the URL parameter and map it to the method argument. The @RequestParam annotation is used to bind request parameters to a method parameter in your controller.
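
A short sketch; the q parameter name is arbitrary:

@RequestMapping("/search")
@ResponseBody
public String search(@RequestParam(value = "q", defaultValue = "") String query) {
    // /search?q=elasticsearch binds query = "elasticsearch"; /search binds query = ""
    return "searching for " + query;
}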

@RequestPart

This annotation is used to annotate request handler method arguments. The @RequestPart annotation can be used instead of @RequestParam to get the content of a specific multipart and bind it to the method argument annotated with @RequestPart. This annotation takes into consideration the “Content-Type” header in the multipart (request part).

@ResponseBody

This annotation is used to annotate request handler methods. The @ResponseBody annotation is similar to the @RequestBody annotation. The @ResponseBody annotation indicates that the result type should be written straight to the response body in whatever format you specify, like JSON or XML. Spring converts the returned object into the response body by using the HttpMessageConverter.
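
A combined sketch of @RequestBody and @ResponseBody; the echo endpoint is hypothetical, and java.util.Map is used so no custom type is needed:

@Controller
public class EchoController {

    @RequestMapping("/echo")
    @ResponseBody // the returned object is written to the response body (e.g. as JSON)
    public Map<String, Object> echo(@RequestBody Map<String, Object> payload) {
        // the JSON request body is converted to a Map by an HttpMessageConverter
        return payload;
    }
}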

@ResponseStatus

This annotation is used on methods and exception classes. @ResponseStatus marks a method or exception class with a status code and a reason that must be returned. When the handler method is invoked, the status code is set on the HTTP response, overriding the status information provided by any other means. A controller class can also be annotated with @ResponseStatus, which is then inherited by all @RequestMapping methods.
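
A minimal sketch of the exception-class usage; CityNotFoundException is hypothetical:

@ResponseStatus(value = HttpStatus.NOT_FOUND, reason = "No such city")
public class CityNotFoundException extends RuntimeException {
    // any handler method that throws this exception produces a 404 response
}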

@ControllerAdvice

This annotation is applied at the class level. As explained earlier, for each controller, you can use @ExceptionHandler on a method that will be called when a given exception occurs. But this handles only those exceptions that occur within the controller in which it is defined. To overcome this problem, you can now use the @ControllerAdvice annotation. This annotation is used to define @ExceptionHandler, @InitBinder, and @ModelAttribute methods that apply to all @RequestMapping methods. Thus, if you define the @ExceptionHandler annotation on a method in a @ControllerAdvice class, it will be applied to all the controllers.
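
A brief sketch, reusing the hypothetical CityNotFoundException from above; this handler now applies to every controller:

@ControllerAdvice
public class GlobalExceptionHandler {

    @ExceptionHandler(CityNotFoundException.class)
    public ResponseEntity<String> handleNotFound(CityNotFoundException ex) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND).body(ex.getMessage());
    }
}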

@RestController

This annotation is used at the class level. The @RestController annotation marks the class as a controller where every method returns a domain object instead of a view. By annotating a class with this annotation, you no longer need to add @ResponseBody to all the request mapping methods. It means that you no longer use view resolvers or send HTML in response. You just send the domain object as an HTTP response in the format that is understood by the consumers, like JSON.

@RestController is a convenience annotation that combines @Controller and @ResponseBody.

@RestControllerAdvice

This annotation is applied to Java classes. @RestControllerAdvice is a convenience annotation that combines @ControllerAdvice and @ResponseBody. This annotation is used along with the @ExceptionHandler annotation to handle exceptions that occur within the controller.

@SessionAttribute

This annotation is used at the method parameter level. The @SessionAttribute annotation is used to bind the method parameter to a session attribute. This annotation provides convenient access to existing or permanent session attributes.

@SessionAttributes

This annotation is applied at the type level for a specific handler. The @SessionAttributes annotation is used when you want to add a JavaBean object into a session. It is used when you want to keep the object in the session for a short time. @SessionAttributes is used in conjunction with @ModelAttribute.

Consider this example:

@ModelAttribute("person")
public Person getPerson() {
    return new Person();
}

// within the same controller as the above snippet
@Controller
@SessionAttributes(value = "person", types = { Person.class })
public class PersonController {
}

The @ModelAttribute name is assigned to the @SessionAttributes as a value. @SessionAttributes has two elements: the value element is the name of the session attribute in the model, and the types element is the type of the session attributes in the model.

Spring Cloud Annotations

@EnableConfigServer

This annotation is used at the class level. When developing a project with a number of services, you need to have a centralized and straightforward manner to configure and retrieve the configurations of all the services that you are going to develop. One advantage of using a centralized config server is that you don’t need to carry the burden of remembering where each configuration is distributed across multiple and distributed components.

You can use Spring Cloud’s @EnableConfigServer annotation to start a config server that the other applications can talk to.
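
A minimal sketch of such a config server entry point (class name assumed; the backing property source, e.g. a Git repository, is configured separately):

@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}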

@EnableEurekaServer

This annotation is applied to Java classes. One problem that you may encounter while decomposing your application into microservices is that it becomes difficult for every service to know the address of every other service it depends on. This is where the discovery service comes in: it is responsible for tracking the locations of all other microservices.

Netflix’s Eureka is an implementation of a discovery server and integration is provided by Spring Boot. Spring Boot has made it easy to design a Eureka Server by just annotating the entry class with @EnableEurekaServer.

@EnableDiscoveryClient

This annotation is applied to Java classes. In order to tell any application to register itself with Eureka, you just need to add the @EnableDiscoveryClient annotation to the application entry point. The application that’s now registered with Eureka uses the Spring Cloud Discovery Client abstraction to interrogate the registry for its own host and port.

@EnableCircuitBreaker

This annotation is applied to Java classes that can act as the circuit breaker. The circuit breaker pattern allows a microservice to continue working when a related service fails, preventing the failure from cascading. This also gives the failed service time to recover.

The class annotated with @EnableCircuitBreaker will monitor, open, and close the circuit breaker.

@HystrixCommand

This annotation is used at the method level. Netflix’s Hystrix library provides the implementation of the circuit breaker pattern. When you apply the circuit breaker to a method, Hystrix watches for failures of the method. Once failures build up to a threshold, Hystrix opens the circuit so that subsequent calls fail fast. Hystrix then redirects calls away from the method, passing them to the specified fallback method.

Hystrix looks for any method annotated with the @HystrixCommand annotation and wraps it into a proxy connected to a circuit breaker so that Hystrix can monitor it.

Consider the following example:

@Service
public class BookService {

    private final RestTemplate restTemplate;

    public BookService(RestTemplate rest) {
        this.restTemplate = rest;
    }

    @HystrixCommand(fallbackMethod = "newList")
    public String bookList() {
        URI uri = URI.create("http://localhost:8081/recommended");
        return this.restTemplate.getForObject(uri, String.class);
    }

    public String newList() {
        return "Cloud native Java";
    }
}

Here, @HystrixCommand is applied to the original method bookList(). The @HystrixCommand annotation has newList as the fallback method. So if, for some reason, Hystrix opens the circuit on bookList(), you will have a placeholder book list ready for the users.

Spring Framework DataAccess Annotations

@Transactional

This annotation is placed before an interface definition, a method on an interface, a class definition, or a public method on a class. The mere presence of @Transactional is not enough to activate the transactional behavior. @Transactional is simply metadata that can be consumed by some runtime infrastructure. This infrastructure uses the metadata to configure the appropriate beans with transactional behavior.

The annotation further supports configuration such as (a sketch follows the list):

  • The propagation type of the transaction
  • The isolation level of the transaction
  • A timeout for the operation wrapped by the transaction
  • A read-only flag: a hint for the persistence provider that the transaction must be read only
  • The rollback rules for the transaction
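
The sketch below exercises these attributes together; TransferService and its method are hypothetical:

@Service
public class TransferService {

    @Transactional(propagation = Propagation.REQUIRED,
                   isolation = Isolation.READ_COMMITTED,
                   timeout = 5,                    // seconds
                   readOnly = false,
                   rollbackFor = Exception.class)  // rollback rule
    public void transfer(long fromId, long toId, long amount) {
        // debit and credit would run in one transaction; a failure rolls both back
    }
}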

Cache-Based Annotations

@Cacheable

This annotation is used on methods. The simplest way of enabling the cache behavior for a method is to annotate it with @Cacheable and parameterize it with the name of the cache where the results would be stored.

@Cacheable("addresses")
public String getAddress(Book book){...}

In the snippet above, the method getAddress is associated with the cache named addresses. Each time the method is called, the cache is checked to see whether the invocation has already been executed; if so, the cached result is returned and the method does not have to run again.

@CachePut

This annotation is used on methods. Whenever you need to update the cache without interfering with the method execution, you can use the @CachePut annotation. That is, the method will always be executed and the result cached.

@CachePut("addresses")
public String getAddress(Book book){...}

Using @CachePut and @Cacheable on the same method is strongly discouraged: the former forces execution in order to update the cache, while the latter causes the method execution to be skipped by using the cache.

@CacheEvict

This annotation is used on methods. It is not that you always want to populate the cache with more and more data. Sometimes, you may want to remove some cache data so that you can populate the cache with some fresh values. In such a case, use the @CacheEvict annotation.

@CacheEvict(value = "addresses", allEntries = true)
public String getAddress(Book book){...}

Here, an additional element, allEntries, is used along with the name of the cache to be emptied. It is set to true so that all cached values are cleared, preparing the cache to hold fresh data.

@CacheConfig

This annotation is a class level annotation. The @CacheConfig annotation helps to streamline some of the cache information at one place. Placing this annotation on a class does not turn on any caching operation. This allows you to store the cache configuration at the class level so that you don’t have to declare things multiple times.
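
A small sketch; the service reuses the addresses cache from the earlier snippets so that @Cacheable no longer has to repeat the name:

@CacheConfig(cacheNames = "addresses") // class-level default cache name
@Service
public class AddressService {

    @Cacheable // resolves to the "addresses" cache declared above
    public String getAddress(Book book) {
        return expensiveLookup(book); // hypothetical expensive call
    }

    private String expensiveLookup(Book book) {
        return "...";
    }
}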

Task Execution and Scheduling Annotations

@Scheduled

This annotation is a method-level annotation. The @Scheduled annotation is used on methods along with the trigger metadata. A method with @Scheduled should have a void return type and should not accept any parameters.

There are different ways of using the @Scheduled annotation:

@Scheduled(fixedDelay = 5000)
public void doSomething() {
    // something that should execute periodically
}

In this case, the duration between the end of the last execution and the start of the next execution is fixed. The tasks always wait until the previous one is finished.

@Scheduled(fixedRate = 5000)
public void doSomething() {
    // something that should execute periodically
}

In this case, the beginning of the task execution does not wait for the completion of the previous execution.

@Scheduled(initialDelay = 1000, fixedRate = 5000)
public void doSomething() {
    // something that should execute periodically after an initial delay
}

The task gets executed initially with a delay and then continues with the specified fixed rate.

@Async

This annotation is used on methods to execute each method in a separate thread. The @Async annotation is provided on a method so that the invocation of that method will occur asynchronously. Unlike methods annotated with @Scheduled, the methods annotated with @Async can take arguments. They will be invoked in the normal way by callers at runtime rather than by a scheduled task.

@Async can be used with both void return type methods and methods that return a value. However, methods with return values must have a Future-typed return value.
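
A minimal sketch returning a CompletableFuture (which implements Future); note that @EnableAsync must be present on a configuration class for @Async to take effect:

@Service
public class ReportService {

    @Async // runs on a separate thread; the caller receives the Future immediately
    public CompletableFuture<String> generateReport(String name) {
        return CompletableFuture.completedFuture("report for " + name);
    }
}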

Spring Framework Testing Annotations

@BootstrapWith

This annotation is a class-level annotation. The @BootstrapWith annotation is used to configure how the Spring TestContext Framework is bootstrapped. This annotation is used as metadata to create custom composed annotations and reduce the configuration duplication in a test suite.

@ContextConfiguration

This annotation is a class-level annotation that defines the metadata used to determine which configuration files to use to load the ApplicationContext for your test. More specifically, @ContextConfiguration declares the annotated classes that will be used to load the context. You can also tell Spring where to locate the configuration file.

@ContextConfiguration(locations = {"example/test-context.xml"}, loader = CustomContextLoader.class)

@WebAppConfiguration

This annotation is a class level annotation. The @WebAppConfiguration is used to declare that the ApplicationContext loaded for an integration test should be a WebApplicationContext. This annotation is used to create the web version of the application context. It is important to note that this annotation must be used with the @ContextConfiguration annotation. The default path to the root of the web application is src/main/webapp. You can override it by passing a different path to the @WebAppConfiguration.

@Timed

This annotation is used on methods. The @Timed annotation indicates that the annotated test method must finish its execution within the specified time period (in milliseconds). If the execution exceeds the time specified in the annotation, the test fails.

@Timed(millis = 10000)
public void testLongRunningProcess() { ... }

In this example, the test will fail if it exceeds 10 seconds of execution.

@Repeat

This annotation is used on test methods. If you want to run a test method several times in a row automatically, you can use the @Repeat annotation. The number of times that test method is to be executed is specified in the annotation.

@Repeat(10)
@Test
public void testProcessRepeatedly() { ... }

In this example, the test will be executed 10 times.

@Commit

This annotation can be used as both class-level or method-level annotation. After execution of a test method, the transaction of the transactional test method can be committed using the @Commit annotation. This annotation explicitly conveys the intent of the code. When used at the class level, this annotation defines the commit for all test methods within the class. When declared as a method level annotation, @Commit specifies the commit for specific test methods overriding the class level commit.

@Rollback

This annotation can be used as both a class-level and a method-level annotation. The @Rollback annotation indicates whether the transaction of a transactional test method must be rolled back after the test completes its execution. If true, as in @Rollback(true), the transaction is rolled back. Otherwise, the transaction is committed. @Commit is used instead of @Rollback(false).

When used at the class level, this annotation defines the rollback for all test methods within the class.

When declared as a method-level annotation, @Rollback specifies the rollback for specific test methods, overriding the class-level rollback semantics.

@DirtiesContext

This annotation is used as both a class-level and method-level annotation. @DirtiesContext indicates that the Spring ApplicationContext has been modified or corrupted in some manner and should be closed. This will trigger context reloading before the execution of the next test. The ApplicationContext is marked as dirty before or after any such annotated method, as well as before or after the current test class.

The @DirtiesContext annotation supports BEFORE_METHOD, BEFORE_CLASS, and BEFORE_EACH_TEST_METHOD modes for closing the ApplicationContext before a test.

NOTE: Avoid overusing this annotation. It is an expensive operation and if abused, it can really slow down your test suite.

@BeforeTransaction

This annotation is used to annotate void methods in the test class. @BeforeTransaction annotated methods indicate that they should be executed before any transaction starts executing. That means the method annotated with @BeforeTransaction must be executed before any method annotated with @Transactional.

@AfterTransaction

This annotation is used to annotate void methods in the test class. @AfterTransaction annotated methods indicate that they should be executed after a transaction ends for test methods. That means the method annotated with @AfterTransaction must be executed after the method annotated with @Transactional.

@Sql

This annotation can be declared on a test class or test method to run SQL scripts against a database. The @Sql annotation configures the resource path to SQL scripts that should be executed against a given database either before or after an integration test method. When @Sql is used at the method level, it will override any @Sql defined at the class level.
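
A short sketch; the script paths are hypothetical classpath resources run before the test method:

@Test
@Sql({ "/create-schema.sql", "/insert-test-data.sql" })
public void findsSeededRows() {
    // assertions against the seeded database go here
}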

@SqlConfig

This annotation is used along with the @Sql annotation. The @SqlConfig annotation defines the metadata that is used to determine how to parse and execute SQL scripts configured via the @Sql annotation. When used at the class level, this annotation serves as global configuration for all SQL scripts within the test class. But when used directly with the config attribute of @Sql, @SqlConfig serves as a local configuration for SQL scripts declared.

@SqlGroup

This annotation is used on test classes and methods. The @SqlGroup annotation is a container annotation that can hold several @Sql annotations, declared as nested annotations.
In addition, @SqlGroup can be used as a meta-annotation to create custom composed annotations. It also backs repeatable-annotation support, where @Sql can be declared several times on the same method or class.
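
A sketch of the container form (the script names are hypothetical):

@SqlGroup({
    @Sql("/create-schema.sql"),
    @Sql("/insert-test-data.sql")
})
@Test
public void runsBothScriptsBeforeTheTest() { ... }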

@SpringBootTest

This annotation is used to start the Spring context for integration tests. This will bring up the full auto-configured application context.

@DataJpaTest

The @DataJpaTest annotation will only provide the autoconfiguration required to test Spring Data JPA using an in-memory database such as H2.

This annotation is used instead of @SpringBootTest.
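
A sketch of a JPA slice test (UserRepository and User are hypothetical types):

@RunWith(SpringRunner.class)
@DataJpaTest // loads only the JPA slice, backed by an in-memory database such as H2
public class UserRepositoryTest {

    @Autowired
    private UserRepository repository;

    @Test
    public void savePersistsEntity() {
        repository.save(new User("alice"));
        // assertions against the in-memory database go here
    }
}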

@DataMongoTest

The @DataMongoTest will provide a minimal autoconfiguration and an embedded MongoDB for running integration tests with Spring Data MongoDB.

@WebMvcTest

The @WebMvcTest annotation will bring up a mock servlet context for testing the MVC layer. Services and components are not loaded into the context. To provide these dependencies for testing, the @MockBean annotation is typically used.

@AutoConfigureMockMvc

The @AutoConfigureMockMvc annotation works very similarly to the @WebMvcTest annotation, but the full Spring Boot context is started.

@MockBean

Creates and injects a Mockito Mock for the given dependency.
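
A sketch combining @WebMvcTest and @MockBean (UserController, UserService and the /users/1 endpoint are hypothetical):

@RunWith(SpringRunner.class)
@WebMvcTest(UserController.class)
public class UserControllerTest {

    @Autowired
    private MockMvc mockMvc; // backed by the mock servlet context

    @MockBean
    private UserService userService; // Mockito mock injected in place of the real bean

    @Test
    public void getUserReturnsOk() throws Exception {
        mockMvc.perform(MockMvcRequestBuilders.get("/users/1"))
               .andExpect(MockMvcResultMatchers.status().isOk());
    }
}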

@JsonTest

Will limit the auto-configuration of Spring Boot to components relevant to processing JSON.

This annotation will also autoconfigure an instance of JacksonTester or GsonTester.
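
A sketch using the auto-configured JacksonTester (the User type is hypothetical):

@RunWith(SpringRunner.class)
@JsonTest
public class UserJsonTest {

    @Autowired
    private JacksonTester<User> json; // auto-configured by @JsonTest

    @Test
    public void serializesTheNameField() throws Exception {
        JsonContent<User> content = json.write(new User("alice"));
        // e.g. assertThat(content).extractingJsonPathStringValue("$.name").isEqualTo("alice")
    }
}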

@TestPropertySource

Class level annotation used to specify property sources for the test class.
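
A sketch showing both a properties file and an inline property (the file name and key are hypothetical; inline properties take precedence over the file):

@RunWith(SpringRunner.class)
@SpringBootTest
@TestPropertySource(
    locations = "classpath:test.properties", // hypothetical properties file
    properties = "feature.enabled=true"      // inline property, overrides the file
)
public class FeatureToggleTest { ... }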

git-cheatsheet

Posted on 2018-04-19 | In git
  • GIT INSTALLATION

For GNU/Linux distributions, Git should be available in the standard system repository. For example, in Debian/Ubuntu please type in the terminal:

$ sudo apt-get install git

If you want or need to install Git from source, you can get it from https://git-scm.com/downloads.
An excellent Git course can be found in the great Pro Git book by Scott Chacon and Ben Straub. The book is available online for free at https://git-scm.com/book.

  • GIT CONFIGURATION
$ git config --global user.name "Your Name"

Set the name that will be attached to your commits and tags.

$ git config --global user.email "you@example.com"

Set the e-mail address that will be attached to your commits and tags.

$ git config --global color.ui auto

Enable some colorization of Git output.

  • STARTING A PROJECT
$ git init [project name]

Create a new local repository. If [project name] is provided, Git will create a new directory named [project name] and will initialize a repository inside it. If [project name] is not provided, then a new repository is initialized in the current directory.

$ git clone [project url]

Downloads a project with its entire history from the remote repository.

$ git remote set-url subrepo/common ssh://git@code.gongyuanhezi.cn:8001/iot/common.git

Change the remote URL.

  • IGNORING FILES
$ cat .gitignore
/logs/*
!logs/.gitkeep
/tmp
*.swp

Thanks to this file, Git will ignore all files in the logs directory (excluding the .gitkeep file), the whole tmp directory and all *.swp files. The described file ignoring will work for the directory (and child directories) where the .gitignore file is placed.

  • DAY-TO-DAY WORK
$ git status

See the status of your work. New, staged, modified files. Current branch.

$ git diff [file]

Show changes between working directory and staging area.

$ git diff --staged [file]

Shows changes in the staging area that haven’t been committed.

$ git checkout -- [file]

Discard changes in working directory. This operation is unrecoverable.

$ git add [file]

Add a file to the staging area. Use . instead of the full file path to add all changed files from the current directory down the directory tree.

$ git reset [file]

Get file back from staging area to working directory.

$ git commit [-m "message here"]

Create a new commit from changes added to the staging area. A commit must have a message! You can provide it with -m; otherwise $EDITOR will be opened.

$ git rm [file]

Remove file from working directory and add deletion to staging area.

$ git stash

Put your current changes into stash.

$ git stash pop

Apply stored stash content into working directory, and clear stash.

$ git stash drop

Clear stash without applying it into working directory.

  • GIT BRANCHING MODEL
$ git branch [-a]

List all local branches in repository. With -a: show all branches (with remote).

$ git branch [name]

Create new branch, referencing the current HEAD.

$ git checkout [-b] [name]

Switch working directory to the specified branch. With -b: Git will create the specified branch if it does not exist.

$ git merge [from name]
$ git merge -X theirs develop // merge from develop, if conflicted, follow develop

git merge -X ours origin/protocolExtend20180409

git merge --abort

Join the specified [from name] branch into your current branch (the one you are currently on).

$ git branch -d [name]

Remove selected branch, if it is already merged into any other. -D instead of -d forces deletion.

  • REVIEW YOUR WORK
$ git log [-n count]

List commit history of current branch. -n count limits list to last n commits.

$ git log --oneline --graph --decorate

An overview with reference labels and a history graph. One commit per line.

$ git log ref..

List commits that are present on the current branch and not merged into ref. A ref can be e.g. a branch name or a tag name.

$ git log ..ref

List commits that are present on ref and not merged into the current branch.

$ git reflog

List operations (like checkouts, commits etc.) made on local repository.

  • TAGGING KNOWN COMMITS
$ git tag

List all tags.

$ git tag [name] [commit sha]

Create a tag named name for the current commit. Add commit sha to tag a specific commit instead of the current one.

$ git tag -a [name] [commit sha]

Create an annotated tag object named name for the current commit.

$ git tag -d [name]

Remove a tag from a local repository.

$ git push origin master && git push --tags

Push commits to origin master and push all tags.

  • REVERTING CHANGES
$ git reset [--hard] [target reference]

Switches the current branch to the target reference and leaves the difference as uncommitted changes. When --hard is used, all changes are discarded.

$ git revert [commit sha]

Create a new commit that reverses the changes introduced by the given commit.

  • SYNCHRONIZING REPOSITORIES
$ git fetch [remote]

git fetch -p

Fetch changes from the remote, but do not update your local branches.

$ git fetch --prune [remote]

Remove remote-tracking refs that were removed from the remote repository.

$ git pull [remote]
$ git pull --all // pull all branches from the remote
$ git pull --rebase

Fetch changes from the remote and merge current branch with its upstream.

$ git push [--tags] [remote]

Push local changes to the remote. Use --tags to push tags.

$ git push -u [remote] [branch]

Push local branch to remote repository. Set its copy as an upstream.

  • GIT SUBREPO
$ git subrepo status

Show subrepo status

$ git subrepo init common -r ssh://git@code.gongyuanhezi.cn:8001/iot/common.git

Initialize the remote repo common as a subrepo.

$ git subrepo pull common

Pull upstream changes into the subrepo common.

$ git subrepo push common

Push local changes of the subrepo common back upstream.

What each term means:

  • commit: an object
  • branch: a reference to a commit; can have a tracked upstream
  • tag: a reference (standard) or an object (annotated)
  • HEAD: a place where your working directory is now

linux-dmesg

Posted on 2018-04-19 | In linux

The ‘dmesg‘ command displays the messages from the kernel ring buffer. A system passes through multiple runlevels, from which we can get a lot of information like the system architecture, CPU, attached devices, RAM, etc. When a computer boots up, the kernel (the core of an operating system) is loaded into memory. During that period a number of messages are displayed, where we can see the hardware devices detected by the kernel.

These messages are very important for diagnostic purposes in case of device failure. When we connect or disconnect a hardware device on the system, with the help of the dmesg command we come to know the detection or disconnection information on the fly. The dmesg command is available on most Linux and Unix-based operating systems.

Let’s throw some light on this famous tool, the ‘dmesg’ command, with practical examples as discussed below. The exact syntax of dmesg is as follows.

# dmesg [options...]

1. List all loaded Drivers in Kernel

We can use text-manipulation tools, i.e. ‘more‘, ‘tail‘, ‘less‘ or ‘grep‘, with the dmesg command. As the output of the dmesg log won’t fit on a single page, piping dmesg into the more or less command will display the logs page by page.

[root@tecmint.com ~]# dmesg | more
[root@tecmint.com ~]# dmesg | less
Sample Output
[    0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
[ 0.000000] Linux version 3.11.0-13-generic (buildd@aatxe) (gcc version 4.8.1 (Ubuntu/Linaro 4.8.1-10ubuntu8) ) #20-Ubuntu SMP Wed Oct 23 17:26:33 UTC 2013
(Ubuntu 3.11.0-13.20-generic 3.11.6)
[ 0.000000] KERNEL supported cpus:
[ 0.000000] Intel GenuineIntel
[ 0.000000] AMD AuthenticAMD
[ 0.000000] NSC Geode by NSC
[ 0.000000] Cyrix CyrixInstead
[ 0.000000] Centaur CentaurHauls
[ 0.000000] Transmeta GenuineTMx86
[ 0.000000] Transmeta TransmetaCPU
[ 0.000000] UMC UMC UMC UMC
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000007dc08bff] usable
[ 0.000000] BIOS-e820: [mem 0x000000007dc08c00-0x000000007dc5cbff] ACPI NVS
[ 0.000000] BIOS-e820: [mem 0x000000007dc5cc00-0x000000007dc5ebff] ACPI data
[ 0.000000] BIOS-e820: [mem 0x000000007dc5ec00-0x000000007fffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000fed003ff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fed20000-0x00000000fed9ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000ffb00000-0x00000000ffffffff] reserved
[ 0.000000] NX (Execute Disable) protection: active
.....


2. List all Detected Devices

To discover which hard disks have been detected by the kernel, you can search for the keyword “sda” along with “grep” as shown below.

[root@tecmint.com ~]# dmesg | grep sda
[ 1.280971] sd 2:0:0:0: [sda] 488281250 512-byte logical blocks: (250 GB/232 GiB)
[ 1.281014] sd 2:0:0:0: [sda] Write Protect is off
[ 1.281016] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
[ 1.281039] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 1.359585] sda: sda1 sda2 < sda5 sda6 sda7 sda8 >
[ 1.360052] sd 2:0:0:0: [sda] Attached SCSI disk
[ 2.347887] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
[ 22.928440] Adding 3905532k swap on /dev/sda6. Priority:-1 extents:1 across:3905532k FS
[ 23.950543] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro
[ 24.134016] EXT4-fs (sda5): mounted filesystem with ordered data mode. Opts: (null)
[ 24.330762] EXT4-fs (sda7): mounted filesystem with ordered data mode. Opts: (null)
[ 24.561015] EXT4-fs (sda8): mounted filesystem with ordered data mode. Opts: (null)

NOTE: ‘sda’ is the first SATA hard drive, ‘sdb’ is the second SATA hard drive and so on. Search with ‘hda’ or ‘hdb’ in the case of an IDE hard drive.

3. Print Only First 20 Lines of Output

Using ‘head’ along with dmesg will show the starting lines, i.e. ‘dmesg | head -20’ will print only the first 20 lines.

[root@tecmint.com ~]# dmesg | head -20
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
[ 0.000000] Linux version 3.11.0-13-generic (buildd@aatxe) (gcc version 4.8.1 (Ubuntu/Linaro 4.8.1-10ubuntu8) ) #20-Ubuntu SMP Wed Oct 23 17:26:33 UTC 2013 (Ubuntu 3.11.0-13.20-generic 3.11.6)
[ 0.000000] KERNEL supported cpus:
[ 0.000000] Intel GenuineIntel
[ 0.000000] AMD AuthenticAMD
[ 0.000000] NSC Geode by NSC
[ 0.000000] Cyrix CyrixInstead
[ 0.000000] Centaur CentaurHauls
[ 0.000000] Transmeta GenuineTMx86
[ 0.000000] Transmeta TransmetaCPU
[ 0.000000] UMC UMC UMC UMC
[ 0.000000] e820: BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000007dc08bff] usable
[ 0.000000] BIOS-e820: [mem 0x000000007dc08c00-0x000000007dc5cbff] ACPI NVS
[ 0.000000] BIOS-e820: [mem 0x000000007dc5cc00-0x000000007dc5ebff] ACPI data
[ 0.000000] BIOS-e820: [mem 0x000000007dc5ec00-0x000000007fffffff] reserved

4. Print Only Last 20 Lines of Output

Using ‘tail’ along with the dmesg command will print only the last 20 lines; this is useful in case we insert a removable device.

[root@tecmint.com ~]# dmesg | tail -20
parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
ppdev: user-space parallel port driver
EXT4-fs (sda1): mounted filesystem with ordered data mode
Adding 2097144k swap on /dev/sda2. Priority:-1 extents:1 across:2097144k
readahead-disable-service: delaying service auditd
ip_tables: (C) 2000-2006 Netfilter Core Team
nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
NET: Registered protocol family 10
lo: Disabled Privacy Extensions
e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Slow work thread pool: Starting up
Slow work thread pool: Ready
FS-Cache: Loaded
CacheFiles: Loaded
CacheFiles: Security denies permission to nominate security context: error -95
eth0: no IPv6 routers present
type=1305 audit(1398268784.593:18630): audit_enabled=0 old=1 auid=4294967295 ses=4294967295 res=1
readahead-collector: starting delayed service auditd
readahead-collector: sorting
readahead-collector: finished

5. Search Detected Device or Particular String

It’s difficult to search for a particular string due to the length of the dmesg output. So, filter the lines that have strings like ‘usb‘, ‘dma‘, ‘tty‘, ‘memory‘, etc. The ‘-i’ option instructs the grep command to ignore the case (upper or lower case letters).

[root@tecmint.com log]# dmesg | grep -i usb
[root@tecmint.com log]# dmesg | grep -i dma
[root@tecmint.com log]# dmesg | grep -i tty
[root@tecmint.com log]# dmesg | grep -i memory
Sample Output
[    0.000000] Scanning 1 areas for low memory corruption
[ 0.000000] initial memory mapped: [mem 0x00000000-0x01ffffff]
[ 0.000000] Base memory trampoline at [c009b000] 9b000 size 16384
[ 0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[ 0.000000] init_memory_mapping: [mem 0x37800000-0x379fffff]
[ 0.000000] init_memory_mapping: [mem 0x34000000-0x377fffff]
[ 0.000000] init_memory_mapping: [mem 0x00100000-0x33ffffff]
[ 0.000000] init_memory_mapping: [mem 0x37a00000-0x37bfdfff]
[ 0.000000] Early memory node ranges
[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x000effff]
[ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[ 0.000000] Memory: 2003288K/2059928K available (6352K kernel code, 607K rwdata, 2640K rodata, 880K init, 908K bss, 56640K reserved, 1146920K highmem)
[ 0.000000] virtual kernel memory layout:
[ 0.004291] Initializing cgroup subsys memory
[ 0.004609] Freeing SMP alternatives memory: 28K (c1a3e000 - c1a45000)
[ 0.899622] Freeing initrd memory: 23616K (f51d0000 - f68e0000)
[ 0.899813] Scanning for low memory corruption every 60 seconds
[ 0.946323] agpgart-intel 0000:00:00.0: detected 32768K stolen memory
[ 1.360318] Freeing unused kernel memory: 880K (c1962000 - c1a3e000)
[ 1.429066] [drm] Memory usable by graphics device = 2048M

6. Clear dmesg Buffer Logs

We can clear the dmesg logs if required with the below command. It will clear the ring buffer message logs up to the point you executed the command. You can still view the logs stored in the ‘/var/log/dmesg‘ file, and connecting any device will generate new dmesg output.

[root@tecmint.com log]# dmesg -c

7. Monitoring dmesg in Real Time

Some distros also allow the command ‘tail -f /var/log/dmesg’ for real-time dmesg monitoring.

[root@tecmint.com log]# watch "dmesg | tail -20"

Conclusion: The dmesg command is useful as it records all system changes as they occur, in real time. As always, you can run man dmesg for more information.

linux-ps

Posted on 2018-04-19 | In linux

ps (processes status) is a native Unix/Linux utility for viewing information concerning a selection of running processes on a system: it reads this information from the virtual files in the /proc filesystem. It is one of the important utilities for system administration, specifically for process monitoring, to help you understand what is going on in a Linux system.

It has numerous options for manipulating its output; however, you’ll find a small number of them practically useful for daily usage.

In this article, we’ll look at 30 useful examples of ps commands for monitoring active running processes on a Linux system.

Note that ps produces output with a heading line, which represents the meaning of each column of information; you can find the meaning of all the labels in the ps man page.

List All Processes in Current Shell

1. If you run ps command without any arguments, it displays processes for the current shell.

$ ps

List Current Running Processes

Print All Processes in Different Formats

2. Display every active process on a Linux system in generic (Unix/Linux) format.

$ ps -A
OR
$ ps -e

List Processes in Standard Format

3. Display all processes in BSD format.

$ ps au
OR
$ ps axu

List Processes in BSD Format

4. To perform a full-format listing, add the -f or -F flag.

$ ps -ef
OR
$ ps -eF

List Processes in Long List Format

Display User Running Processes

5. To select all processes owned by you (the runner of the ps command, root in this case), type:

$ ps -x

6. To display a user’s processes by real user ID (RUID) or name, use the -U flag.

$ ps -fU tecmint
OR
$ ps -fU 1000

List User Processes by ID

7. To select a user’s processes by effective user ID (EUID) or name, use the -u option.

$ ps -fu tecmint
OR
$ ps -fu 1000

Print All Processes Running as Root (Real and Effective ID)

8. The command below enables you to view every process running with root user privileges (real & effective ID) in user format.

$ ps -U root -u root

Display Root User Running Processes

Display Group Processes

9. If you want to list all processes owned by a certain group (real group ID (RGID) or name), type.

$ ps -fG apache
OR
$ ps -fG 48


10. To list all processes owned by effective group name (or session), type.

$ ps -fg apache

Display Processes by PID and PPID

11. You can list processes by PID as follows.

$ ps -fp 1178

List Processes by PID

12. To select processes by PPID, type.

$ ps -f --ppid 1154

List Process by PPID

13. Make selection using PID list.

$ ps -fp 2226,1154,1146

List Processes by PIDs

Display Processes by TTY

14. To select processes by tty, use the -t flag as follows.

$ ps -t pts/0
$ ps -t pts/1
$ ps -ft tty1

List Processes by TTY

Print Process Tree

15. A process tree shows how processes on the system are linked to each other; processes whose parents have been killed are adopted by the init (or systemd).

$ ps -e --forest

List Process Tree

16. You can also print a process tree for a given process like this.

$ ps -f --forest -C sshd
OR
$ ps -ef --forest | grep -v grep | grep sshd

List Tree View of Process

Print Process Threads

17. To print all threads of a process, use the -L flag; this will show the LWP (light weight process) as well as NLWP (number of light weight processes) columns.

$ ps -fL -C httpd

List Process Threads

Specify Custom Output Format

Using the -o or --format options, ps allows you to build user-defined output formats as shown below.

18. To list all format specifiers, include the L flag.

$ ps L

19. The command below allows you to view the PID, PPID, user name and command of a process.

$ ps -eo pid,ppid,user,cmd

List Processes with Names

20. Below is another example of a custom output format showing file system group, nice value, start time and elapsed time of a process.

$ ps -p 1154 -o pid,ppid,fgroup,ni,lstart,etime

List Process ID Information

21. To find a process name using its PID.

$ ps -p 1154 -o comm=

Find Process using PID

Display Parent and Child Processes

22. To select a specific process by its name, use the -C flag; this will also display all its child processes.

$ ps -C sshd

Find Parent Child Process

23. Find all PIDs of all instances of a process, which is useful when writing scripts that need to read PIDs from standard output or a file.

$ ps -C httpd -o pid=

Find All Process PIDs

24. Check execution time of a process.

$ ps -eo comm,etime,user | grep httpd

The output below shows that the HTTPD service has been running for 1 hour, 48 minutes and 17 seconds.

Find Process Uptime

Troubleshoot Linux System Performance

If your system isn’t working as it should, for instance if it’s unusually slow, you can perform some system troubleshooting as follows.

26. Find top running processes by highest memory and CPU usage in Linux.

$ ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head
OR
$ ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%cpu | head

Find Top Running Processes

27. To kill Linux processes/unresponsive applications or any process that is consuming too much CPU time.

First, find the PID of the unresponsive process or application.

$ ps -A | grep -i stress

Then use the kill command to terminate it immediately.

$ kill -9 2583 2584

Find and Kill a Process

Print Security Information

28. Show security context (specifically for SELinux) like this.

$ ps -eM
OR
$ ps --context

Find SELinux Context

29. You can also display security information in user-defined format with this command.

$ ps -eo euser,ruser,suser,fuser,f,comm,label

List SELinux Context by Users

Perform Real-time Process Monitoring Using Watch Utility

30. Finally, since ps displays static information, you can employ the watch utility to perform real-time process monitoring with repetitive output, displayed after every second as in the command below (specify a custom ps command to achieve your objective).

$ watch -n 1 'ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head'

Real Time Process Monitoring

Important: ps only shows static information; to view frequently updated output you can use tools such as htop, top and glances. The last two are in fact Linux system performance monitoring tools.

linux-find

Posted on 2018-04-19 | In linux

The Linux find command is one of the most important and most used commands on Linux systems. The find command is used to search and locate the list of files and directories based on conditions you specify. find can be used in a variety of conditions: you can find files by permissions, users, groups, file type, date, size and other possible criteria.

Through this article, we are sharing our day-to-day Linux find command experience and its usage in the form of examples. In this article we will show you the 35 most used find command examples in Linux. We have divided the article into six parts, from basic to advanced usage of the find command.

  1. Part I: Basic Find Commands for Finding Files with Names
  2. Part II: Find Files Based on their Permissions
  3. Part III: Search Files Based On Owners and Groups
  4. Part IV: Find Files and Directories Based on Date and Time
  5. Part V: Find Files and Directories Based on Size
  6. Part VI: Find Multiple Filenames in Linux

Part I – Basic Find Commands for Finding Files with Names

1. Find Files Using Name in Current Directory

Find all the files whose name is tecmint.txt in the current working directory.

# find . -name tecmint.txt
./tecmint.txt

2. Find Files Under Home Directory

Find all the files under /home directory with name tecmint.txt.

# find /home -name tecmint.txt
/home/tecmint.txt

3. Find Files Using Name and Ignoring Case

Find all the files whose name is tecmint.txt, matching both capital and small letters, in the /home directory.

# find /home -iname tecmint.txt
./tecmint.txt
./Tecmint.txt

4. Find Directories Using Name

Find all directories whose name is Tecmint in the / directory.

# find / -type d -name Tecmint
/Tecmint

5. Find PHP Files Using Name

Find all php files whose name is tecmint.php in the current working directory.

# find . -type f -name tecmint.php
./tecmint.php

6. Find all PHP Files in Directory

Find all php files in a directory.

# find . -type f -name "*.php"
./tecmint.php
./login.php
./index.php

Part II – Find Files Based on their Permissions

7. Find Files With 777 Permissions

Find all the files whose permissions are 777.

# find . -type f -perm 0777 -print

8. Find Files Without 777 Permissions

Find all the files without permission 777.

# find / -type f ! -perm 777

9. Find SGID Files with 644 Permissions

Find all the SGID bit files whose permissions are set to 644.

# find / -perm 2644

10. Find Sticky Bit Files with 551 Permissions

Find all the Sticky Bit set files whose permissions are 551.

# find / -perm 1551

11. Find SUID Files

Find all SUID set files.

# find / -perm /u=s

12. Find SGID Files

Find all SGID set files.

# find / -perm /g=s

13. Find Read Only Files

Find all Read Only files.

# find / -perm /u=r

14. Find Executable Files

Find all Executable files.

# find / -perm /a=x

15. Find Files with 777 Permissions and Chmod to 644

Find all 777 permission files and use chmod command to set permissions to 644.

# find / -type f -perm 0777 -print -exec chmod 644 {} \;

16. Find Directories with 777 Permissions and Chmod to 755

Find all 777 permission directories and use chmod command to set permissions to 755.

# find / -type d -perm 777 -print -exec chmod 755 {} \;

17. Find and Remove Single File

To find a single file called tecmint.txt and remove it.

# find . -type f -name "tecmint.txt" -exec rm -f {} \;

18. Find and Remove Multiple Files

To find and remove multiple files such as .mp3 or .txt files, use:

# find . -type f -name "*.txt" -exec rm -f {} \;
OR
# find . -type f -name "*.mp3" -exec rm -f {} \;

19. Find all Empty Files

To find all empty files under a certain path.

# find /tmp -type f -empty

20. Find all Empty Directories

To find all empty directories under a certain path.

# find /tmp -type d -empty

21. Find all Hidden Files

To find all hidden files, use the below command.

# find /tmp -type f -name ".*"

Part III – Search Files Based On Owners and Groups

22. Find Single File Based on User

To find a single file called tecmint.txt under the / (root) directory, owned by root.

# find / -user root -name tecmint.txt

23. Find all Files Based on User

To find all files that belong to user tecmint under the /home directory.

# find /home -user tecmint

24. Find all Files Based on Group

To find all files that belong to group developer under the /home directory.

# find /home -group developer

25. Find Particular Files of User

To find all .txt files of user tecmint under the /home directory.

# find /home -user tecmint -iname "*.txt"

Part IV – Find Files and Directories Based on Date and Time

26. Find Last 50 Days Modified Files

To find all the files which are modified 50 days back.

# find / -mtime 50

27. Find Last 50 Days Accessed Files

To find all the files which are accessed 50 days back.

# find / -atime 50

28. Find Last 50-100 Days Modified Files

To find all the files which are modified more than 50 days back and less than 100 days.

# find / -mtime +50 -mtime -100

29. Find Changed Files in Last 1 Hour

To find all the files which are changed in last 1 hour.

# find / -cmin -60

30. Find Modified Files in Last 1 Hour

To find all the files which are modified in last 1 hour.

# find / -mmin -60

31. Find Accessed Files in Last 1 Hour

To find all the files which are accessed in last 1 hour.

# find / -amin -60

Part V – Find Files and Directories Based on Size

32. Find 50MB Files

To find all 50MB files, use.

# find / -size 50M

33. Find Size between 50MB – 100MB

To find all the files which are greater than 50MB and less than 100MB.

# find / -size +50M -size -100M

34. Find and Delete 100MB Files

To find all files larger than 100MB and delete them using one single command.

# find / -size +100M -exec rm -rf {} \;

35. Find Specific Files and Delete

Find all .mp3 files larger than 10MB and delete them using one single command.

# find / -type f -name "*.mp3" -size +10M -exec rm {} \;

That’s it, we are ending this post here. In our next article we will discuss more about other Linux commands in depth with practical examples. Let us know your opinions about this article using our comment section.

Many times we are locked in a situation where we have to search for multiple files with different extensions; this has probably happened to several Linux users, especially from within the terminal.

There are several Linux utilities that we can use to locate or find files on the file system, but finding multiple filenames or files with different extensions can sometimes prove tricky and requires specific commands.

Find Multiple File Names in Linux

One of the many utilities for locating files on a Linux file system is the find utility and in this how-to guide, we shall walk through a few examples of using find to help us locate multiple filenames at once.

Before we dive into the actual commands, let us look at a brief introduction to the Linux find utility.

The simplest and general syntax of the find utility is as follows:

# find directory options [ expression ]

Let us proceed to look at some examples of find command in Linux.

1. Assuming that you want to find all files in the current directory with .sh and .txt file extensions, you can do this by running the command below:

# find . -type f \( -name "*.sh" -o -name "*.txt" \)

Find .sh and .txt Extension Files in Linux

Interpretation of the command above:

  1. . means the current directory
  2. -type option is used to specify file type and here, we are searching for regular files as represented by f
  3. -name option is used to specify a search pattern in this case, the file extensions
  4. -o means “OR”

It is recommended that you enclose the file extensions in brackets, and also use the \ (backslash) escape character as in the command.

2. To find three filenames with .sh, .txt and .c extensions, issue the command below:

# find . -type f \( -name "*.sh" -o -name "*.txt" -o -name "*.c" \)

Find Multiple File Extensions in Linux

3. Here is another example where we search for files with .png, .jpg, .deb and .pdf extensions:

# find /home/aaronkilik/Documents/ -type f \( -name "*.png" -o -name "*.jpg" -o -name "*.deb" -o -name "*.pdf" \)

Find More than 3 File Extensions in Linux

If you critically observe all the commands above, the little trick is using the -o option in the find command: it enables you to add more filenames to the search array, provided you know the filenames or file extensions you are searching for.

netstat

Posted on 2018-04-19 | In linux

netstat (network statistics) is a command-line tool for monitoring network connections, both incoming and outgoing, as well as viewing routing tables, interface statistics, etc. netstat is available on all Unix-like operating systems and also on Windows. It is very useful for network troubleshooting and performance measurement. netstat is one of the most basic network service debugging tools, telling you what ports are open and whether any programs are listening on them.

This tool is very important and useful for Linux network administrators as well as system administrators who monitor and troubleshoot network-related problems and measure network traffic performance. This article shows usages of the netstat command with examples which may be useful in daily operation.

1. Listing all the LISTENING Ports of TCP and UDP connections

Listing all ports (both TCP and UDP) using the netstat -a option.

# netstat -a | more
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:sunrpc *:* LISTEN
tcp 0 52 192.168.0.2:ssh 192.168.0.1:egs ESTABLISHED
tcp 1 0 192.168.0.2:59292 www.gov.com:http CLOSE_WAIT
tcp 0 0 localhost:smtp *:* LISTEN
tcp 0 0 *:59482 *:* LISTEN
udp 0 0 *:35036 *:*
udp 0 0 *:npmp-local *:*
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
unix 2 [ ACC ] STREAM LISTENING 16972 /tmp/orbit-root/linc-76b-0-6fa08790553d6
unix 2 [ ACC ] STREAM LISTENING 17149 /tmp/orbit-root/linc-794-0-7058d584166d2
unix 2 [ ACC ] STREAM LISTENING 17161 /tmp/orbit-root/linc-792-0-546fe905321cc
unix 2 [ ACC ] STREAM LISTENING 15938 /tmp/orbit-root/linc-74b-0-415135cb6aeab

2. Listing TCP Ports connections

Listing only TCP (Transmission Control Protocol) port connections using netstat -at.

# netstat -at
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 localhost:ipp *:* LISTEN
tcp 0 0 localhost:smtp *:* LISTEN
tcp 0 52 192.168.0.2:ssh 192.168.0.1:egs ESTABLISHED
tcp 1 0 192.168.0.2:59292 www.gov.com:http CLOSE_WAIT

3. Listing UDP Ports connections

Listing only UDP (User Datagram Protocol) port connections using netstat -au.

# netstat -au
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
udp 0 0 *:35036 *:*
udp 0 0 *:npmp-local *:*
udp 0 0 *:mdns *:*

4. Listing all LISTENING Connections

Listing all active listening ports connections with netstat -l.

# netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:sunrpc *:* LISTEN
tcp 0 0 *:58642 *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
udp 0 0 *:35036 *:*
udp 0 0 *:npmp-local *:*
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node Path
unix 2 [ ACC ] STREAM LISTENING 16972 /tmp/orbit-root/linc-76b-0-6fa08790553d6
unix 2 [ ACC ] STREAM LISTENING 17149 /tmp/orbit-root/linc-794-0-7058d584166d2
unix 2 [ ACC ] STREAM LISTENING 17161 /tmp/orbit-root/linc-792-0-546fe905321cc
unix 2 [ ACC ] STREAM LISTENING 15938 /tmp/orbit-root/linc-74b-0-415135cb6aeab

5. Listing all TCP Listening Ports

Listing all active listening TCP ports by using option netstat -lt.

# netstat -lt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:dctp *:* LISTEN
tcp 0 0 *:mysql *:* LISTEN
tcp 0 0 *:sunrpc *:* LISTEN
tcp 0 0 *:munin *:* LISTEN
tcp 0 0 *:ftp *:* LISTEN
tcp 0 0 localhost.localdomain:ipp *:* LISTEN
tcp 0 0 localhost.localdomain:smtp *:* LISTEN
tcp 0 0 *:http *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 *:https *:* LISTEN

6. Listing all UDP Listening Ports

Listing all active listening UDP ports by using option netstat -lu.

# netstat -lu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
udp 0 0 *:39578 *:*
udp 0 0 *:meregister *:*
udp 0 0 *:vpps-qua *:*
udp 0 0 *:openvpn *:*
udp 0 0 *:mdns *:*
udp 0 0 *:sunrpc *:*
udp 0 0 *:ipp *:*
udp 0 0 *:60222 *:*
udp 0 0 *:mdns *:*

7. Listing all UNIX Listening Ports

Listing all active UNIX listening ports using netstat -lx.

# netstat -lx
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node Path
unix 2 [ ACC ] STREAM LISTENING 4171 @ISCSIADM_ABSTRACT_NAMESPACE
unix 2 [ ACC ] STREAM LISTENING 5767 /var/run/cups/cups.sock
unix 2 [ ACC ] STREAM LISTENING 7082 @/tmp/fam-root-
unix 2 [ ACC ] STREAM LISTENING 6157 /dev/gpmctl
unix 2 [ ACC ] STREAM LISTENING 6215 @/var/run/hald/dbus-IcefTIUkHm
unix 2 [ ACC ] STREAM LISTENING 6038 /tmp/.font-unix/fs7100
unix 2 [ ACC ] STREAM LISTENING 6175 /var/run/avahi-daemon/socket
unix 2 [ ACC ] STREAM LISTENING 4157 @ISCSID_UIP_ABSTRACT_NAMESPACE
unix 2 [ ACC ] STREAM LISTENING 60835836 /var/lib/mysql/mysql.sock
unix 2 [ ACC ] STREAM LISTENING 4645 /var/run/audispd_events
unix 2 [ ACC ] STREAM LISTENING 5136 /var/run/dbus/system_bus_socket
unix 2 [ ACC ] STREAM LISTENING 6216 @/var/run/hald/dbus-wsUBI30V2I
unix 2 [ ACC ] STREAM LISTENING 5517 /var/run/acpid.socket
unix 2 [ ACC ] STREAM LISTENING 5531 /var/run/pcscd.comm

8. Showing Statistics by Protocol

Displays statistics by protocol. By default, statistics are shown for the TCP, UDP, ICMP, and IP protocols. The -s parameter can be used to specify a set of protocols.

# netstat -s
Ip:
2461 total packets received
0 forwarded
0 incoming packets discarded
2431 incoming packets delivered
2049 requests sent out
Icmp:
0 ICMP messages received
0 input ICMP message failed.
ICMP input histogram:
1 ICMP messages sent
0 ICMP messages failed
ICMP output histogram:
destination unreachable: 1
Tcp:
159 active connections openings
1 passive connection openings
4 failed connection attempts
0 connection resets received
1 connections established
2191 segments received
1745 segments send out
24 segments retransmited
0 bad segments received.
4 resets sent
Udp:
243 packets received
1 packets to unknown port received.
0 packet receive errors
281 packets sent

9. Showing Statistics by TCP Protocol

Showing statistics of only TCP protocol by using option netstat -st.

# netstat -st
Tcp:
2805201 active connections openings
1597466 passive connection openings
1522484 failed connection attempts
37806 connection resets received
1 connections established
57718706 segments received
64280042 segments send out
3135688 segments retransmited
74 bad segments received.
17580 resets sent

10. Showing Statistics by UDP Protocol

# netstat -su
Udp:
1774823 packets received
901848 packets to unknown port received.
0 packet receive errors
2968722 packets sent

11. Displaying Service name with PID

Displaying service names with their PID numbers: using the option netstat -tp will display a “PID/Program name” column.

# netstat -tp
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 192.168.0.2:ssh 192.168.0.1:egs ESTABLISHED 2179/sshd
tcp 1 0 192.168.0.2:59292 www.gov.com:http CLOSE_WAIT 1939/clock-applet

12. Displaying Promiscuous Mode

Displaying promiscuous mode with the -ac switch: netstat prints the selected information and refreshes the screen every five seconds (the default is to refresh every second).

# netstat -ac 5 | grep tcp
tcp 0 0 *:sunrpc *:* LISTEN
tcp 0 0 *:58642 *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 localhost:ipp *:* LISTEN
tcp 0 0 localhost:smtp *:* LISTEN
tcp 1 0 192.168.0.2:59447 www.gov.com:http CLOSE_WAIT
tcp 0 52 192.168.0.2:ssh 192.168.0.1:egs ESTABLISHED
tcp 0 0 *:sunrpc *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 localhost:ipp *:* LISTEN
tcp 0 0 localhost:smtp *:* LISTEN
tcp 0 0 *:59482 *:* LISTEN

13. Displaying Kernel IP routing

Display the kernel IP routing table with netstat (the route command shows the same table).

# netstat -r
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
192.168.0.0 * 255.255.255.0 U 0 0 0 eth0
link-local * 255.255.0.0 U 0 0 0 eth0
default 192.168.0.1 0.0.0.0 UG 0 0 0 eth0

14. Showing Network Interface Transactions

Showing network interface packet transactions, including both transmitted and received packets, with the MTU size.

# netstat -i
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 4459 0 0 0 4057 0 0 0 BMRU
lo 16436 0 8 0 0 0 8 0 0 0 LRU

15. Showing Kernel Interface Table

Showing the kernel interface table, similar to the ifconfig command.

# netstat -ie
Kernel Interface table
eth0 Link encap:Ethernet HWaddr 00:0C:29:B4:DA:21
inet addr:192.168.0.2 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:feb4:da21/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4486 errors:0 dropped:0 overruns:0 frame:0
TX packets:4077 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2720253 (2.5 MiB) TX bytes:1161745 (1.1 MiB)
Interrupt:18 Base address:0x2000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:480 (480.0 b) TX bytes:480 (480.0 b)

16. Displaying IPv4 and IPv6 Information

Displays multicast group membership information for both IPv4 and IPv6.

# netstat -g
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 all-systems.mcast.net
eth0 1 224.0.0.251
eth0 1 all-systems.mcast.net
lo 1 ff02::1
eth0 1 ff02::202
eth0 1 ff02::1:ffb4:da21
eth0 1 ff02::1

17. Print Netstat Information Continuously

To get netstat information every few seconds, use the following command; it will print the netstat information continuously.

# netstat -c
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 tecmint.com:http sg2nlhg007.shr.prod.s:36944 TIME_WAIT
tcp 0 0 tecmint.com:http sg2nlhg010.shr.prod.s:42110 TIME_WAIT
tcp 0 132 tecmint.com:ssh 115.113.134.3.static-:64662 ESTABLISHED
tcp 0 0 tecmint.com:http crawl-66-249-71-240.g:41166 TIME_WAIT
tcp 0 0 localhost.localdomain:54823 localhost.localdomain:smtp TIME_WAIT
tcp 0 0 localhost.localdomain:54822 localhost.localdomain:smtp TIME_WAIT
tcp 0 0 tecmint.com:http sg2nlhg010.shr.prod.s:42091 TIME_WAIT
tcp 0 0 tecmint.com:http sg2nlhg007.shr.prod.s:36998 TIME_WAIT

18. Finding Unsupported Address Families

Finding unconfigured address families, with some useful information.

# netstat --verbose
netstat: no support for `AF IPX' on this system.
netstat: no support for `AF AX25' on this system.
netstat: no support for `AF X25' on this system.
netstat: no support for `AF NETROM' on this system.

19. Finding Listening Programs

Find out which programs are listening on which ports.

# netstat -ap | grep http
tcp 0 0 *:http *:* LISTEN 9056/httpd
tcp 0 0 *:https *:* LISTEN 9056/httpd
tcp 0 0 tecmint.com:http sg2nlhg008.shr.prod.s:35248 TIME_WAIT -
tcp 0 0 tecmint.com:http sg2nlhg007.shr.prod.s:57783 TIME_WAIT -
tcp 0 0 tecmint.com:http sg2nlhg007.shr.prod.s:57769 TIME_WAIT -
tcp 0 0 tecmint.com:http sg2nlhg008.shr.prod.s:35270 TIME_WAIT -
tcp 0 0 tecmint.com:http sg2nlhg009.shr.prod.s:41637 TIME_WAIT -
tcp 0 0 tecmint.com:http sg2nlhg009.shr.prod.s:41614 TIME_WAIT -
unix 2 [ ] STREAM CONNECTED 88586726 10394/httpd

20. Displaying RAW Network Statistics

# netstat --statistics --raw
Ip:
62175683 total packets received
52970 with invalid addresses
0 forwarded
Icmp:
875519 ICMP messages received
destination unreachable: 901671
echo request: 8
echo replies: 16253
IcmpMsg:
InType0: 83
IpExt:
InMcastPkts: 117

That’s it. If you are looking for more information and options for the netstat command, refer to the netstat manual or use the man netstat command. If we’ve missed anything in the list, please inform us using our comment section below, so we can keep updating this list based on your comments.

Netty Issues

Posted on 2018-04-02 | In Java

Too many duplicated TCP links

root@production-watchdog:~# netstat -an|grep 65003| sort -n | uniq -c
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:11547 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:16857 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:19251 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:21405 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:22151 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:24348 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:25293 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:31504 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:33963 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:37082 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:37454 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:38615 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:40626 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:42882 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:45746 ESTABLISHED
1 tcp6 0 0 10.10.10.10:65003 11.11.11.11:48254 ESTABLISHED

If the board loses power, reboots, or is on a weak network, it cannot actively and correctly close the SOCKET connections it has already established, which leaves redundant connections on the server side.
After the board is powered off and back on, the old connection is already dead, but because it was never closed properly, it lingers as a redundant entry, as in the netstat listing above.
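
One common mitigation (a sketch, not the fix used in this post) is to add Netty's IdleStateHandler to the server pipeline and close connections that stay silent longer than the board's heartbeat interval; the 120-second read-idle timeout below is an assumed value:

import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;
import java.util.concurrent.TimeUnit;

// inside ChannelInitializer.initChannel(ch) on the server bootstrap
ch.pipeline().addLast(new IdleStateHandler(120, 0, 0, TimeUnit.SECONDS));
ch.pipeline().addLast(new ChannelDuplexHandler() {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            ctx.close(); // reap the stale connection left behind by a power-cycled board
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
});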

Netty ByteBuf leak

  • Netty.docs: Reference counted objects
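
The usual pattern from the reference-counting docs, sketched for a terminal inbound handler that consumes messages instead of passing them on:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.ReferenceCountUtil;

public class ConsumingHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        try {
            // ... process msg here; do not forward it down the pipeline ...
        } finally {
            ReferenceCountUtil.release(msg); // release the buffer, otherwise it leaks
        }
    }
}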

redisson EVAL failed in tencent cloud

Posted on 2018-04-02 | In Java

When using Tencent Cloud's Cloud Redis Storage, the EVAL command is not supported.

  • Log

    2018-04-01 23:59:19 WARN  [nioEventLoopGroup-10-1] i.n.channel.DefaultChannelPipeline 151 - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
    org.redisson.client.RedisException: ERR unknown command ' EVAL '. channel: [id: 0x126df2e7, L:/172.17.0.4:42414 - R:/172.31.48.16:6379] command: (EVAL), params: [local v = redis.call('hget', KEYS[1], ARGV[1]); redis.call('hdel', KEYS[1], ARGV[1]); return v, 1, redisson_live_object:{223030303030303030303033303030303122}:com.parkbox.domain.DeviceStatus:uniqueId..., PooledUnsafeDirectByteBuf(ridx: 0, widx: 14, cap: 256)]
    at org.redisson.client.handler.CommandDecoder.decode(CommandDecoder.java:243)
    at org.redisson.client.handler.CommandDecoder.decode(CommandDecoder.java:103)
    at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
    at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:367)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
    2018-04-01 23:59:21 DEBUG [nioEventLoopGroup-10-1] c.p.protoc
    • Tencent Cloud does not currently enable some Redis scripting commands

    • Cloud Redis Storage

    • Redisson is very convenient to use, but it requires the Redis environment to support the EVAL command - 沧海一滴 - 博客园

    • Implementing distributed locks based on Redis: Redisson usage and source code analysis - 文章 - 伯乐在线
© 2018 markhuyong
Powered by Hexo
Theme - NexT.Muse