Exposing your GitLab SSH port on the internet can be dangerous, because an attacker who compromises it may get shell access to your server. Here we show a way to enable SSH for Git without opening access to the shell of the hosting OS.
Step 1: Run another SSH instance just for GitLab
Copy the sshd config file and make a soft link of the sshd binary in /usr/sbin:
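For example, something along these lines (the _git suffix is just one possible naming; adjust the paths to your distro):
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config_git
sudo ln -s /usr/sbin/sshd /usr/sbin/sshd_git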
In some cases, you might want to recover your data from Zimbra, or you may be curious about how Zimbra manages users and emails behind the scenes.
I dug into Zimbra's nuts and bolts because we lost Zimbra's local LDAP, and it took more than 3 days to fix, so I want to share what I found about this email server with anyone who might be facing the same problem.
Zimbra uses LDAP, MySQL, and file systems to store emails and relate them to users.
Zimbra uses a local LDAP even if you define an external one for authentication. In fact, every user in Zimbra has two different IDs: one is stored in MySQL and usually starts from 1 and increases sequentially, while the other resides in the local LDAP in the form of a universally unique identifier (UUID), a randomly generated number.
There is a table in Zimbra's MySQL called mailbox where the database ID is mapped to the LDAP UUID, and usernames are stored in a field called comment.
Now that we understand how MySQL and LDAP are related, let's examine how email metadata is stored in Zimbra.
In the MySQL instance you'll find databases named mboxgroup1 through mboxgroup100; these store the metadata for the actual .msg files kept under /opt/zimbra/store/0/{ID}. This metadata is what gets shown in the web app or served over IMAP/POP3.
User IDs in Zimbra are sharded across MySQL: when a user logs into their mailbox for the first time, Zimbra assigns the next incremental ID in MySQL and then uses that ID modulo 100 to decide which mboxgroup database the user's data will live in. You can read more about the Zimbra database structure here.
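For example, you can look the mapping up straight from MySQL (I believe the account_id column is the one holding the LDAP UUID; the email address is a placeholder):
SELECT id, group_id, account_id, comment FROM zimbra.mailbox WHERE comment = 'user@example.com';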
Notes for Recovering a Faulty Zimbra:
If you are going to recover a faulty internal LDAP within Zimbra, there are a couple of things you need to know.
The order of users is important: if a user's ID in the old Zimbra is 1, then the same user's ID in the new Zimbra should be 1 as well
You might not need the internal LDAP data itself, since it only stores the UUID; you can create new users in LDAP using zmprov ca [email protected] password
LDAP database resides in /opt/zimbra/data/ldap/mdb/db/ and its size is 80GB
You need to recover mboxgroup1 through mboxgroup100 if you really care about your old data; you can back them up using the MySQL tools shipped inside Zimbra, just like a regular MySQL
You also need to recover the zimbra database (this database lives inside MySQL; don't confuse the name with the actual Zimbra service) and copy it over as well. Inside the zimbra database there is a table called config, and within it a field called db.version which tracks schema changes in Zimbra. Make sure this value in the new Zimbra is the same as it was in the old one.
The actual email files are in /opt/zimbra/store/0/{ID}. The {ID} is the same as the id in the mailbox table.
When you want to set up a DNS server on a *nix platform, the first option that may cross your mind is bind9, but there are other options such as PowerDNS. In this post I'm going to show you how to set up a DNS server in single-node mode. This DNS server is going to be both authoritative and a forwarder (in PowerDNS terms, a recursor). The database will be MySQL, and to manage it I'm going to use powerdns-admin running in Docker. All of this will be installed on a single node with Ubuntu 18.04 LTS and PowerDNS 4.1.1. This solution is suitable for small to medium-sized companies, although in this scenario I don't configure a secondary DNS server.
A little bit of theory first: DNS servers have two modes, authoritative and forwarder. In authoritative mode, when a client asks for a domain name, the DNS server is responsible for giving out the IP address; in other words, authoritative DNS servers are the ones that own the IP/domain database.
A DNS forwarder's task is to redirect requests to other authoritative DNS servers.
In this scenario we want to set up a DNS server for a company that answers local DNS requests as well as redirecting external requests to other DNS servers.
First of all update your packages :
sudo apt -y update && sudo apt -y upgrade
Ubuntu 18+ ships with a new stub DNS resolver, and this daemon listens on UDP port 53, which is going to be used by PowerDNS instead, so we have to stop and disable this service:
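On Ubuntu 18.04 that daemon is systemd-resolved, so for example:
sudo systemctl disable systemd-resolved
sudo systemctl stop systemd-resolved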
After disabling this daemon you will no longer have local DNS resolution; to work around this, edit /etc/resolv.conf and add the following line:
nameserver 8.8.8.8 # you need to set a DNS server
Then we have to prepare the MySQL backend:
sudo apt install mysql-server
After installing MySQL, if you are interested you can harden it using the following:
sudo mysql_secure_installation
This command will take you through a series of questions, such as setting a root password for MySQL or disabling the anonymous user. In my case I will configure the MySQL root account with mysql_native_password mode so I can access the MySQL database using a password.
Configuring mysql_native_password :
sudo mysql
> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'PASSWORD';
> FLUSH PRIVILEGES;
> exit;
Note: Make sure your server's date/time is correct. It might not seem important in this case, but it is best practice to keep date/time in sync.
Installing PowerDNS :
sudo apt install pdns-server pdns-backend-mysql
In contrast to older versions of PowerDNS, with 4.1.1 you don't need to do anything further, since the installer takes care of configuring MySQL and the rest of the configuration. To make sure everything is working correctly, you can check PowerDNS's MySQL settings with:
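For instance, you can look at the generated gmysql backend config (the exact file name under pdns.d may differ between package versions):
sudo cat /etc/powerdns/pdns.d/pdns.local.gmysql.conf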
At this point you have a DNS server installed in its default authoritative mode on port 53, which can only respond to DNS requests it knows about; in other words, it doesn't answer queries such as google.com since it has no DNS forwarder.
Since we want this DNS server to handle both modes, the forwarder needs to listen on port 53 rather than the authoritative one, so we will change the default port of the authoritative DNS server to something else.
vim /etc/powerdns/pdns.conf
Edit the following:
local-address=127.0.0.1
local-port=5300
Restart PowerDNS
sudo systemctl restart pdns
Make sure PowerDNS listens to port 5300
sudo netstat -nlp | grep 5300
Now install the PowerDNS recursor
sudo apt install pdns-recursor
Edit recursor configs
sudo vim /etc/powerdns/recursor.conf
In order to respond to local requests, there should be a domain name; in our example I will use example.com
Also, if you want to choose a specific external DNS forwarder, you can configure it like the following:
forward-zones=example.com=127.0.0.1:5300,.=8.8.8.8
And then restart the service
sudo systemctl restart pdns-recursor
Now you have a fully functional DNS server which can serve both local and forwarded requests. But how about managing it? There is a utility shipped with PowerDNS called pdnsutil, but I am not going to use it; instead I will install a GUI administration tool called powerdns-admin, which I will run using docker-compose. All I need is a powerdns-admin docker-compose file.
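A minimal docker-compose.yml sketch of what that file could look like (the image name, tag, internal port, and data path are assumptions; check the powerdns-admin project for the image you actually want to run):
version: "3"
services:
  powerdns-admin:
    image: ngoduykhanh/powerdns-admin:latest   # assumed image name; adjust to the one you use
    container_name: powerdns-admin
    ports:
      - "9191:80"                              # expose the UI on host port 9191
    volumes:
      - ./pda-data:/data                       # keep the SQLite database on the host
    restart: always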
This way powerdns-admin will start with SQLite, which is sufficient for my setup. Then I will run the service with the following command:
docker-compose up -d
Once this docker-compose file is running we can reach the UI on port 9191 via a browser. First we need to create a user, which is straightforward; then we can log in to powerdns-admin using the created username/password. But powerdns-admin doesn't work without an API key and API URL, so we need to enable the PowerDNS API and webserver first. These settings belong to the authoritative component; edit the following file:
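These settings live in /etc/powerdns/pdns.conf, the same file we edited earlier. Something along these lines should do it (the key is a placeholder you must change, and the allow-from network should be the one powerdns-admin connects from, e.g. the Docker bridge subnet):
api=yes
api-key=CHANGE_ME_TO_A_SECRET_KEY
webserver=yes
webserver-address=0.0.0.0
webserver-port=8081
webserver-allow-from=172.17.0.0/16
Restart pdns afterwards (sudo systemctl restart pdns) so the API comes up.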
Since we are running powerdns-admin in Docker on the same machine as the PowerDNS server, it has to know the internal IP address PowerDNS is reachable on. We can find the Docker bridge IP using:
ip r
We are looking for the Docker bridge IP address, which in my case is 172.17.0.1; yours may be different, so make sure you have the correct IP first.
Then go back to the browser: Settings > PDNS
Add your IP address (as the API URL) and the API key, and you're done. Now you have a fully functional DNS server with a GUI administration tool.
To edit the local DNS zone, go to the Dashboard and create example.com there; the rest is easy.
Just remember: don't edit PowerDNS records directly in MySQL unless you know what you are doing, otherwise you'll run into errors and your DNS won't work properly.
Also, in this scenario I didn't configure a firewall, but you should configure one and allow users access only to specific ports.
Sometimes you want to automate cumbersome tasks on your Cisco devices. In my case I am dealing with an old 3750 core router running IOS 12.x, and I don't want to log into it manually every time I want to change a config or shut down an interface. So I thought I could use the ssh command to access the device and automate it, but ssh doesn't help at all due to Cisco's exec channel issue; in fact you can't send multiple lines of commands to your device via the ssh command.
After searching for a while I figured out that I can use Plink instead of ssh. Plink belongs to the PuTTY project; Windows users can download it from here, and if you are a Linux user you can install it via the command line.
Using Plink it is easy to communicate with your Cisco devices. One way I automate some of my tasks looks like the following:
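A sketch of the kind of invocation I mean (the IP address, username, password, and host-key fingerprint are placeholders); feeding the command file to the interactive shell via stdin sidesteps the exec-channel problem mentioned above:
plink -ssh admin@192.168.1.10 -pw PASSWORD -hostkey FINGERPRINT < commands101.txt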
As you can see, I defined a commands101.txt; inside this file I put my Cisco commands.
conf t
interface gi1/0/22
no shutdown
do wr
exit
exit
exit
Breakdown :
The only thing you need to know is that you need the host key (public key) of your device. The -hostkey option supplies that key's fingerprint to Plink, so Plink works in silent mode and won't prompt you to accept the host key.
If you are working in an enterprise infrastructure, chances are you are using a centralized authentication system, most likely Active Directory or openLDAP. In this blog I'll explore how to create a REST API using Spring Boot that authenticates against openLDAP and returns a JWT token.
Before getting our hands dirty, we need to review the architecture of Spring Security and the way we want to use it in a REST API endpoint. As for openLDAP, I've explained its concepts briefly before; you can read more about it here. I'll also assume that you know how Spring Boot and JWT work.
Spring Security
In this example I will extend WebSecurityConfigurerAdapter. This class will help me intercept Spring Security's filter chain and insert an openLDAP authentication provider in between.
In fact, this abstract class provides convenient methods for customizing the Spring Security configuration via the HttpSecurity object.
First of all, I injected three different beans as follows:
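A sketch of those injections, plus one way to register the LDAP provider with the AuthenticationManagerBuilder (the field names jwtAuthenticationEntryPoint and jwtRequestFilter are used later in this config; the exact class names JwtAuthenticationEntryPoint and JwtRequestFilter are assumptions based on those field names):
@Autowired
private JwtAuthenticationEntryPoint jwtAuthenticationEntryPoint;   // returns 401 for unauthenticated requests

@Autowired
private JwtRequestFilter jwtRequestFilter;                         // validates the JWT on every request

@Autowired
private OpenLdapAuthenticationProvider openLdapAuthenticationProvider;  // the custom LDAP provider shown below

@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
    // Plug the openLDAP provider into Spring Security's AuthenticationManager
    auth.authenticationProvider(openLdapAuthenticationProvider);
}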
This lets us override the default authentication behaviour of Spring Security. In addition, we need to override configure(HttpSecurity httpSecurity):
@Override
protected void configure(HttpSecurity httpSecurity) throws Exception {
    // We don't need CSRF for this example
    httpSecurity
            .csrf().disable()
            .headers()
            .frameOptions()
            .deny()
            .and()
            // don't authenticate this particular request
            .authorizeRequests().antMatchers("/api/login").permitAll()
            // all other requests need to be authenticated
            .antMatchers("/api/**").authenticated().and()
            .exceptionHandling().authenticationEntryPoint(jwtAuthenticationEntryPoint).and()
            // make sure we use a stateless session; the session won't be used to
            // store the user's state.
            .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS);

    // Add a filter to validate the tokens with every request
    httpSecurity.addFilterBefore(jwtRequestFilter, UsernamePasswordAuthenticationFilter.class);
}
Also, for the sake of manually authenticating a user in /api/login, we will expose the authenticationManagerBean():
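The usual way to do that in a WebSecurityConfigurerAdapter is a short override:
@Bean
@Override
public AuthenticationManager authenticationManagerBean() throws Exception {
    // Expose the AuthenticationManager so it can be @Autowired in the login controller
    return super.authenticationManagerBean();
}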
After configuring WebSecurityConfig, I'll provide my custom authentication provider. This provider uses Spring's LdapTemplate and lets us establish a connection to an LDAP server.
@Component
public class OpenLdapAuthenticationProvider implements AuthenticationProvider {

    @Autowired
    private LdapContextSource contextSource;

    private LdapTemplate ldapTemplate;

    @PostConstruct
    private void initContext() {
        contextSource.setUrl("ldap://1.1.1.1:389/ou=users,dc=www,dc=devcrutch,dc=com");
        // I use anonymous binding, so there is no need to provide a bind user/pass
        contextSource.setAnonymousReadOnly(true);
        contextSource.afterPropertiesSet();
        ldapTemplate = new LdapTemplate(contextSource);
    }

    @Override
    public Authentication authenticate(Authentication authentication) throws AuthenticationException {
        Filter filter = new EqualsFilter("uid", authentication.getName());
        boolean authenticated = ldapTemplate.authenticate(LdapUtils.emptyLdapName(), filter.encode(),
                authentication.getCredentials().toString());
        if (authenticated) {
            List<GrantedAuthority> grantedAuthorities = new ArrayList<>();
            grantedAuthorities.add(new SimpleGrantedAuthority("ROLE_USER"));
            UserDetails userDetails = new User(authentication.getName(), authentication.getCredentials().toString(),
                    grantedAuthorities);
            Authentication auth = new UsernamePasswordAuthenticationToken(userDetails,
                    authentication.getCredentials().toString(), grantedAuthorities);
            return auth;
        } else {
            return null;
        }
    }

    @Override
    public boolean supports(Class<?> authentication) {
        return authentication.equals(UsernamePasswordAuthenticationToken.class);
    }
}
Another thing we have to take into consideration is implementing the user login controller. Since we haven't provided any filter for handling the username and password, we have to implement it manually as follows:
@RestController
@RequestMapping("/api/login")
public class LoginController {

    @Autowired
    private AuthenticationManager authenticationManager;

    @Autowired
    private JwtTokenUtil jwtTokenUtil;

    @Autowired
    private UserService userService;

    @PostMapping
    public ResponseEntity<?> createAuthenticationToken(@RequestBody JwtRequest authenticationRequest) throws Exception {
        authenticate(authenticationRequest.getUsername(), authenticationRequest.getPassword());
        final User userDetails = userService.loadUserByUsername(authenticationRequest.getUsername());
        final String token = jwtTokenUtil.generateToken(userDetails);
        return ResponseEntity.ok(new JwtResponse(token));
    }

    private void authenticate(String username, String password) throws Exception {
        try {
            authenticationManager.authenticate(new UsernamePasswordAuthenticationToken(username, password));
        } catch (DisabledException e) {
            throw new Exception("USER_DISABLED", e);
        } catch (BadCredentialsException e) {
            throw new Exception("INVALID_CREDENTIALS", e);
        }
    }
}
These are the pillars of a REST API + JWT + LDAP back end using Spring Boot.
Now we can test the API using a REST client.
After getting the JWT token we can call the authorized endpoints.
In fact you can't do it without knowing the DN! There is anonymous access in openLDAP, which is enabled by default. Anonymous access lets you query (apply a search filter to) openLDAP without knowing a bind username/password.
Run the following command on your openLDAP server:
ldapwhoami -H ldap:// -x
If you get "anonymous" as the result you are all set and your openLDAP supports anonymous queries; otherwise this blog is not the one you are looking for!
So What’s the Deal?
Assume you know the UID of a user in the LDAP directory but not their DN, and assume the root of the directory hierarchy is dc=devcrutch,dc=com. How do you get the CN and then the DN of such a user just by using the UID? You might think of a query like this to get the data out of openLDAP:
uid=USERNAME,dc=devcrutch,dc=com
If you run this query, it'll get you nowhere.
Here Comes the Anonymous Query
First of all you have to search the whole directory for such a user, with a query similar to the following:
(&(objectClass=*)(uid=USERNAME))
This query will search the entire Directory Information Tree (DIT) for such a username.
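For example, you could run the search anonymously from the command line (the server address is a placeholder; the base DN comes from the example above):
ldapsearch -x -H ldap://SERVER_IP -b "dc=devcrutch,dc=com" "(&(objectClass=*)(uid=USERNAME))" dn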
With such a query you can get the DN. Using the DN and password you can then authenticate against LDAP.
Well, that's the idea: apply a search filter on the USERNAME via the anonymous identity, find the DN, and finally log in using the retrieved DN.
Time for Some Java
Now that we have the rough idea, it's time to implement it in Java. To find the DN you need to query the entire LDAP directory (note: in the real world, searching the entire directory is not a good idea; you should narrow your query, otherwise it might consume all the server's resources).
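A minimal sketch of that flow with Spring LDAP (the class and method names, and the way I wire the LdapContextSource, are my own choices for illustration):
import java.util.List;
import javax.naming.directory.DirContext;
import org.springframework.ldap.core.ContextMapper;
import org.springframework.ldap.core.DirContextAdapter;
import org.springframework.ldap.core.LdapTemplate;
import org.springframework.ldap.core.support.LdapContextSource;
import org.springframework.ldap.filter.EqualsFilter;
import org.springframework.ldap.support.LdapUtils;

public class AnonymousDnLookup {

    /** Finds the user's DN with an anonymous search, then binds with that DN to verify the password. */
    public boolean authenticateByUid(String url, String baseDn, String uid, String password) {
        LdapContextSource contextSource = new LdapContextSource();
        contextSource.setUrl(url);                  // e.g. ldap://1.1.1.1:389
        contextSource.setBase(baseDn);              // e.g. dc=devcrutch,dc=com
        contextSource.setAnonymousReadOnly(true);   // anonymous query, no bind user/pass needed
        contextSource.afterPropertiesSet();

        LdapTemplate ldapTemplate = new LdapTemplate(contextSource);

        // 1. Search the whole tree (see the warning above) and map every hit to its full DN.
        //    EqualsFilter takes care of escaping the uid value.
        String filter = "(&(objectClass=*)" + new EqualsFilter("uid", uid).encode() + ")";
        List<String> dns = ldapTemplate.search("", filter,
                (ContextMapper<String>) result -> ((DirContextAdapter) result).getNameInNamespace());
        if (dns.isEmpty()) {
            return false;                           // no such user
        }

        // 2. Bind with the discovered DN and the supplied password.
        DirContext boundContext = null;
        try {
            boundContext = contextSource.getContext(dns.get(0), password);
            return true;                            // bind succeeded, credentials are valid
        } catch (org.springframework.ldap.NamingException e) {
            return false;                           // bind failed, bad credentials
        } finally {
            LdapUtils.closeContext(boundContext);
        }
    }
}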
In Spring Boot there is an annotation, @Async, that helps developers write concurrent applications, but using this feature is a bit tricky. In this blog we will see how to use it along with CompletableFuture. I assume you already know the drill about CompletableFuture, so I won't repeat the concept here.
First of all, you need to annotate your application class with @EnableAsync; this annotation tells Spring to look for methods annotated with @Async and run them in a separate executor.
@SpringBootApplication
@EnableAsync
public class App {

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}
If you take a look at the Spring Boot example about @Async with CompletableFuture, you'll notice the way they use the feature is based on a REST request. In my opinion that's a bit limited; it doesn't give you a clue about how to use the feature in other situations. For instance, if you have a long-running task, what would you do about it?
// Source : https://spring.io/guides/gs/async-method/
package hello;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;
import java.util.concurrent.CompletableFuture;
@Service
public class GitHubLookupService {

    private static final Logger logger = LoggerFactory.getLogger(GitHubLookupService.class);

    private final RestTemplate restTemplate;

    public GitHubLookupService(RestTemplateBuilder restTemplateBuilder) {
        this.restTemplate = restTemplateBuilder.build();
    }

    @Async
    public CompletableFuture<User> findUser(String user) throws InterruptedException {
        logger.info("Looking up " + user);
        String url = String.format("https://api.github.com/users/%s", user);
        User results = restTemplate.getForObject(url, User.class);
        // Artificial delay of 1s for demonstration purposes
        Thread.sleep(1000L);
        return CompletableFuture.completedFuture(results);
    }
}
In findUser(String user) there is an artificial delay inside the method, and the main task of the method is fetching data from GitHub using RestTemplate, which is a "synchronous client to perform HTTP requests". But how about a long-running task such as calling a network function, like pinging a server, from your REST endpoint? In that case you need to tailor the CompletableFuture; you can't simply compute the result inline, wrap it with CompletableFuture.completedFuture(...) and carry on.
To use @Async in your code, your method has to return a Future or CompletableFuture; for more information you can refer to its documentation. Take a look at the following example:
In this example I override the get() method and return the CompletableFuture without any thread executor; with this approach we ask Spring to execute the @Async method in a different thread, but we don't provide any thread executor ourselves, the body of a background worker is enough.
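If you just want the basic @Async plus CompletableFuture mechanics without overriding get(), a simplified sketch could look like this (the service and method names, the host parameter, and the use of InetAddress.isReachable are my own assumptions for illustration):
import java.io.IOException;
import java.net.InetAddress;
import java.util.concurrent.CompletableFuture;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class PingService {

    @Async
    public CompletableFuture<Boolean> ping(String host) {
        try {
            // Blocking network call; thanks to @Async it runs in the executor thread, not the caller's
            boolean reachable = InetAddress.getByName(host).isReachable(2000);
            return CompletableFuture.completedFuture(reachable);
        } catch (IOException e) {
            CompletableFuture<Boolean> failed = new CompletableFuture<>();
            failed.completeExceptionally(e);
            return failed;
        }
    }
}
The caller gets the CompletableFuture back immediately and can decide when to block on it, for example with join() or get().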
P.S.: In this example I decided to use a network function inside Spring Boot just for the sake of argument, but it's better not to call network functions directly from a REST endpoint, especially when you expect an immediate result. The reason: network functions are blocking, which means that if you call this REST endpoint, you'll have to wait for the result. It's highly advisable to use other methods, such as a queue or a push mechanism (e.g. WebSockets), for calling blocking functions.
OpenLDAP installation is fairly straightforward and doesn't have any caveats, but making it replicate involves some ambiguity. We will start by installing openLDAP. I will use the following setup:
ubuntu 16.04 server
openLDAP 2.4.x
phpLDAPadmin
Installing openLDAP :
First things first, update your Ubuntu box:
sudo apt-get update
Install openLDAP :
sudo apt-get install slapd ldap-utils
During the installation process you will be prompted to enter an administrator password. After installing the LDAP server you need to configure it:
sudo dpkg-reconfigure slapd
You will see a basic GUI with a couple of prompts on how to configure your openLDAP; here is my config:
Omit openLDAP Server Configuration : No
DNS Domain : your domain in my case, lab.devcrutch.com
Organization Name : whatever you fancy, lab
Database : MDB (it's LMDB, a memory-mapped database that replaced the older BerkeleyDB-based backends, in case you were curious)
Remove Database when openLDAP is Removed : No
Move Old Database : Yes
Allow LDAP2 : No
That's it. If you ever want to check the status of your openLDAP:
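For example, bind as the admin DN you configured (adjust the DN to your own domain):
ldapwhoami -H ldap:// -x -D "cn=admin,dc=lab,dc=devcrutch,dc=com" -W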
This command will prompt you for your password, and if you enter it correctly you will get the following response:
dn:cn=admin,dc=lab,dc=devcrutch,dc=com
You are all set to use openLDAP. Now let's add a user for replication purposes on the provider (master) node. The replication user only needs a DN and a password; create it with an LDIF like the one below and add it with ldapadd.
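A sketch of such an LDIF (the objectClasses are one common choice for a service account; generate the password hash with slappasswd and paste it in place of the placeholder):
dn: cn=repl,dc=lab,dc=devcrutch,dc=com
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: repl
description: replication user
userPassword: {SSHA}PASTE_HASH_FROM_SLAPPASSWD_HERE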
Save the above in an LDIF file and run the following command; I will call this file add_repl.ldif.
ldapadd -Y EXTERNAL -H ldapi:// -f add_repl.ldif
This user needs the privilege to read only a couple of items from the directory; the most important ones are userPassword, cn, uid and shadowLastChange. But before granting such access, there is an issue with the openLDAP configs shipped by Ubuntu 16.04, and it is best to remove those configs. You can see them with the following command:
#Run ldapsearch -Y EXTERNAL -H ldapi:// -b "cn=config"
#These configs are not appropriate for making your openLDAP replicable
olcAccess: {0}to attrs=userPassword by self write by anonymous auth by * none
olcAccess: {1}to attrs=shadowLastChange by self write by * read
olcAccess: {2}to * by * read
For deleting them run:
ldapmodify -Y EXTERNAL -H ldapi://
In the prompt, write the following lines one by one (this way you delete them step by step and avoid errors):
dn: olcDatabase={1}mdb,cn=config
changetype: modify
delete: olcAccess
olcAccess: {0}to attrs=userPassword by self write by anonymous auth by * none
#press enter
dn: olcDatabase={1}mdb,cn=config
changetype: modify
delete: olcAccess
olcAccess: {0}to attrs=shadowLastChange by self write by * read
#press enter
dn: olcDatabase={1}mdb,cn=config
changetype: modify
delete: olcAccess
olcAccess: {0}to * by * read
#Press ctrl-d at the end
And add the following configs:
#Execute ldapmodify -Y EXTERNAL -H ldapi://
#Then write these configs in it, at end press ctrl-d
dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userPassword,shadowLastChange by self write by anonymous auth by dn="cn=admin,dc=lab,dc=devcrutch,dc=com" write by dn="cn=repl,dc=lab,dc=devcrutch,dc=com" read by * none
#press enter
dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {1}to dn.base="" by * read
#press enter
dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {2}to * by self write by dn="cn=admin,dc=lab,dc=devcrutch,dc=com" write by * read
Now your provider is ready; we move on to the consumer server. First install openLDAP with the same settings as on the master. At the end, add the following configs to your consumer's openLDAP:
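A sketch of the syncrepl consumer config you would add with ldapmodify (the provider address, rid, and retry interval are placeholders; note that the provider side also needs the syncprov module/overlay loaded for replication to work):
dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcSyncrepl
olcSyncrepl: rid=001
  provider=ldap://PROVIDER_IP:389
  bindmethod=simple
  binddn="cn=repl,dc=lab,dc=devcrutch,dc=com"
  credentials=REPL_USER_PASSWORD
  searchbase="dc=lab,dc=devcrutch,dc=com"
  scope=sub
  type=refreshAndPersist
  retry="30 +"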
The reason I apply these configs separately is the way the ldap tools behave: if one of these configs already exists in your openLDAP, ldapmodify complains and kicks you out without telling you which config was saved and which one wasn't, so for me the safest way was applying them one by one.
The reason for having another user rather than cn=admin is security: if you take a closer look at the latter config, you will see that you have to put the password in as plain text, so it's best not to reveal your admin password. The repl user is a read-only user.
At the end you can install phpLDAPadmin on the provider and the consumer:
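On Ubuntu 16.04 the package is called phpldapadmin, so this should do it:
sudo apt-get install phpldapadmin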
Note: In this tutorial I've tried to create a replicated server. Replication doesn't mean you have availability; it does not mean that if your master (provider) server goes down your clients automatically query the slave (consumer). Replication means consistency, not availability. If you want availability, you need to configure openLDAP in multi-master mode.
Iptables is a software firewall based on Netfilter; in fact it's a user-space tool for working with Netfilter. Generally, firewalls have two modes: stateless and stateful. In this post we will take a brief look at how to configure Netfilter in stateful mode.
I'm going to assume your Linux box is a fresh installation and doesn't have any rules on it. You can check your iptables rules by typing the following command:
sudo iptables -nvL -t filter
Breakdown:
-L : Shows the list of rules
-t filter : t stands for table. The table we want to work with is called filter; even though it's the default table, I'd rather mention it explicitly
-n : Avoids slow reverse DNS lookups and only shows IP addresses
-v : Verbose
Next, write the following commands:
sudo iptables -A INPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -p tcp -m state --state NEW -j ACCEPT
Rule of Thumb: The order in which you write rules matters. You have to take into account that Netfilter's rules are checked sequentially, and the fate of a packet is determined by the first match.
The first rule says that if the packet is ESTABLISHED or is RELATED to another packet (e.g. ICMP error messages), then it can pass through. If the packet is completely NEW to Netfilter, it skips the first rule and Netfilter tries to match it against the second rule. Since ESTABLISHED and RELATED packets are more frequent, putting that rule first helps iptables perform faster by reducing the number of rules to check.
Reason: When a client sends a packet to a server, it actually sends a SYN to the server; the client's packet enters the NEW state in Netfilter.
Then the server sends a SYN+ACK back to the client, and now it's the client's turn to send an ACK back to the server. The connection is in the ESTABLISHED state after that ACK.
Breakdown:
-A : Appends the rule to the given chain, in this case the INPUT chain
-p : Protocol (in this case TCP)
-m : Which module we want to use; to make Netfilter stateful we use the state module
--state : Identifies the state of the packet. This argument comes after -m state
-j : What action Netfilter takes on the packet: ACCEPT, DROP or REJECT
Note: the state module is deprecated and you can use the conntrack module instead, but according to this post the state module is still valid, so there is no need to worry about it.
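If you prefer the newer module, the equivalent of the first rule with conntrack looks like this:
sudo iptables -A INPUT -p tcp -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT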
In this post I dived into Netfilter stateful packet filtering and tried to explain why one needs to write rules in this order; of course there are many stones left unturned. Hopefully I will write more about iptables/Netfilter.