Thursday, June 3, 2021

Surviving a Ransomware Attack Using Backups

 

A 3-2-1 backup strategy for business-critical data to withstand ransomware attacks:

 

Keep three (3) copies of the data (including recent versions), stored across two (2) different storage media/locations, with one (1) copy held encrypted at a cloud storage provider.

 

If one of the backups becomes encrypted by a ransomware attack, we can recover from a different source, provided a backup is present in another location.

 

Ransomware can target any local backup reachable over the network, such as local shadow copies or network-attached storage; as a result, any network resource a user has access to can become encrypted. To prevent this we should follow strict air-gap policies, such as:

·         taking media offline as quickly as possible by physically disconnecting it after the backup operation

·         maintaining up-to-date malware detection tools

·         keeping systems patched

 

Using air-gapped, off-site media is best practice. We can also use immutable, write-once-read-many (WORM) storage, such as optical disks, flash storage, or tape configured as WORM; AWS, Azure, and a few other cloud providers offer WORM-format cloud storage. We also need to be prepared for:

·         the time it takes to restore systems

·         prioritizing systems for recovery

·         clean networks for recovery purposes

·         the frequency of backups, which also matters:

o   How often does the data change?

o   How much does it impact the business if the backup data is not current?

 

And finally, securing the endpoint:

·         network policies and protection using antivirus, antispyware/antimalware, firewalls, etc.

·         limit execution of unapproved programs on workstations

·         limit the write capabilities of end users so that, even if they download and run a ransomware application, it is unable to encrypt files beyond the user’s specific files

·         file reputation scoring systems (e.g., Symantec)

Thursday, January 16, 2020

Liveness/Readiness Probe

Your company just finished releasing a candy-themed mobile game. So far, things are going well, and the back end services running in your Kubernetes cluster are servicing thousands of requests. However, there have been a few issues with the back end service.

Container Health Issues

The first issue is caused by application instances entering an unhealthy state and responding to user requests with error messages. Unfortunately, this state does not cause the container to stop, so the Kubernetes cluster is not able to detect this state and restart the container. Luckily, the application has an internal endpoint that can be used to detect whether or not it is healthy. This endpoint is /healthz on port 8081. Your first task will be to create a probe to check this endpoint periodically. If the endpoint returns an error or fails to respond, the probe will detect this and the cluster will restart the container.

Container Startup Issues

Another issue is caused by new pods when they are starting up. The application takes a few seconds after startup before it is ready to service requests. As a result, some users are getting error messages during this brief time. To fix this, you will need to create another probe. To detect whether the application is ready, the probe should simply make a request to the root endpoint, /, on port 80. If this request succeeds, then the application is ready.

There is already a pod descriptor in the home directory: ~/candy-service-pod.yml. Edit this file to add the probes, then create the pod in the cluster to test it.
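Putting the two requirements together, here is a sketch of what the probe sections of ~/candy-service-pod.yml could look like (the container name, image, and timing values are placeholders; only the paths and ports come from the task above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: candy-service
spec:
  containers:
  - name: candy-service            # assumed container name
    image: candy-service:1.0       # assumed image
    # liveness: restart the container if /healthz on 8081 errors or times out
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8081
      initialDelaySeconds: 5
      periodSeconds: 5
    # readiness: hold traffic until / on port 80 responds successfully
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
```

Apply it with `kubectl create -f ~/candy-service-pod.yml` and watch `kubectl get pods` to confirm the pod reports Ready.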

Monday, February 18, 2019

Docker for Springboot

Docker is a container management service. The keywords of Docker are develop, ship and run anywhere. The whole idea of Docker is for developers to easily develop applications, ship them into containers which can then be deployed anywhere.

To dockerize an app, we need to create a Dockerfile.
Follow the steps below to create a Dockerfile for dockerizing a Spring Boot app -

Create a file named Dockerfile and copy in the content below.

Note: the 'f' is always lowercase in Dockerfile.


FROM openjdk:8-jdk-alpine
# WORKDIR creates /tmp if it does not already exist
WORKDIR /tmp
# COPY is preferred over ADD for local files; the path is relative to the build context
COPY target/sample-app.jar sample-app.jar

ENTRYPOINT ["java", "-jar", "/tmp/sample-app.jar"]

If you have the jar pushed to a remote Artifactory, you can download it using wget.
Use the below Dockerfile for a remote repo:


FROM openjdk:8-jdk-alpine
WORKDIR /tmp
# wget must run inside a RUN instruction (alpine ships busybox wget)
RUN wget <url of repo>/sample-app.jar
ENTRYPOINT ["java", "-jar", "/tmp/sample-app.jar"]


Build the docker image
docker build -t sample-docker-build:0.1  .


To run the Docker image
docker run -p 9001:8080 sample-docker-build:0.1

  •  8080 is the default port a Spring Boot app runs on
  •  9001 is the port exposed for it, so users access the app on port 9001
  •  You can also expose the same port the app is running on



Wednesday, September 26, 2018

How to execute a stored proc alongside JPA - Spring Boot 2.0





Define the data sources in application.yaml

app:
  datasource:
    oracle-local:
      url: jdbc:oracle:thin:@localhost:10152/test
      username: wrkday_apps
      password: bread4all
      driver-class: oracle.jdbc.driver.OracleDriver
    postgres-local:
      url: jdbc:postgresql://localhost:5432/
      username: postgres
      password: postgres
      driver-class: org.postgresql.Driver



Create a config file


import javax.sql.DataSource;

import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.jdbc.core.simple.SimpleJdbcCall;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;

@Configuration
public class DatasourceConfig {

    @Bean
    @Primary
    @ConfigurationProperties("app.datasource.oracle-local")
    public DataSourceProperties oracleDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    @Primary
    @ConfigurationProperties("app.datasource.oracle-local")
    public DataSource oracleDataSource() {
        return oracleDataSourceProperties().initializeDataSourceBuilder().build();
    }

    @Bean(name = "oracleEntityManagerFactory")
    @Primary
    public LocalContainerEntityManagerFactoryBean oracleEntityManagerFactory(
            EntityManagerFactoryBuilder builder) {
        return builder
                .dataSource(oracleDataSource())
                // Audit and Audit1 are the JPA entity classes for this unit
                .packages(Audit.class, Audit1.class)
                .persistenceUnit("local1")
                .build();
    }

    @Configuration
    @EnableJpaRepositories(basePackages = "com.jlabs.repo",
            entityManagerFactoryRef = "oracleEntityManagerFactory")
    public class oracleConfiguration {
    }

    @Bean
    @ConfigurationProperties("app.datasource.postgres-local")
    public DataSourceProperties localDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "localDataSource")
    @ConfigurationProperties("app.datasource.postgres-local")
    public DataSource localDataSource() {
        return localDataSourceProperties().initializeDataSourceBuilder().build();
    }

    @Bean(name = "jdbcMaster")
    public SimpleJdbcCall masterJdbcTemplate() {
        return new SimpleJdbcCall(localDataSource())
                .withSchemaName("public")
                // must match the Postgres function created further down
                .withFunctionName("insert_into_table");
    }
}


To execute the stored proc

import java.util.Date;
import java.util.HashMap;
import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.simple.SimpleJdbcCall;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class LocalTestDAO {

    private static final Logger LOGGER = LoggerFactory.getLogger(LocalTestDAO.class);

    @Autowired
    public SimpleJdbcCall jdbcMaster;

    @Transactional
    public Integer savetoLocalTestPkg() {
        LOGGER.info("Saving to TestPkg");
        try {
            // parameter names must match the function's argument names
            final Map<String, Object> params = new HashMap<>();
            params.put("_sno", 8);
            params.put("_eid", 104);
            params.put("_ename", "jim");
            params.put("_sd", new Date());
            params.put("_ed", new Date());
            params.put("_sid", 150000);
            jdbcMaster.execute(params);
            return 1;
        } catch (Exception e) {
            LOGGER.error("Saving to EDW TestPkg failed", e);
            return -1;
        }
    }
}



Steps to create the stored proc

CREATE TABLE app_for_leave
(
  sno INTEGER NOT NULL,
  eid INTEGER,
  ename VARCHAR(20),
  sd DATE,
  ed DATE,
  sid INTEGER,
  status BOOLEAN DEFAULT FALSE,
  CONSTRAINT pk_snoa PRIMARY KEY (sno)
);


CREATE OR REPLACE FUNCTION insert_into_table(_sno INTEGER, _eid INTEGER, _ename VARCHAR(20), _sd DATE, _ed DATE, _sid INTEGER)
RETURNS void AS
$BODY$
BEGIN
 INSERT INTO app_for_leave(sno, eid, ename, sd, ed, sid)
  VALUES(_sno, _eid, _ename, _sd, _ed, _sid);
END;
$BODY$
LANGUAGE plpgsql VOLATILE
 COST 100;

To manually execute the proc -
SELECT * FROM insert_into_table(2,102,'Jimmy','2013-08-16','2013-08-17',12);

Tuesday, July 10, 2018

Enable Auto Restart - Spring Boot DevTools - IntelliJ

To enable auto-restart for a Spring Boot app anytime something on the classpath changes:

Add the below dependency to the pom:

<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
</dependency>



Set your IntelliJ build settings to build the project automatically




Applications that use spring-boot-devtools automatically restart whenever files on the classpath change. This can be a useful feature when working in an IDE, as it gives a very fast feedback loop for code changes. By default, any entry on the classpath that points to a folder is monitored for changes. Note that certain resources, such as static assets and view templates, do not trigger an application restart.
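If watching the whole classpath is too aggressive, DevTools restart behavior can also be tuned in application.properties; a small sketch (the trigger-file name is arbitrary):

```properties
# restart is on by default whenever devtools is on the classpath
spring.devtools.restart.enabled=true
# optionally restart only when this specific file is touched
spring.devtools.restart.trigger-file=.reloadtrigger
```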


Enable automatic compilation
Get a faster development feedback loop by enabling automatic compilation:
  1. Access Settings (Preferences on macOS).
  2. Select Build, Execution, Deployment > Compiler. Enable Build project automatically.
  3. Select Appearance & Behavior > System Settings. Enable Save files automatically if application is idle for. We recommend setting it to 10 seconds. (This is already enabled by default on IntelliJ 2018.1 and earlier.)
  4. Press OK.
  5. Press Ctrl+Shift+A (Cmd+Shift+A on macOS) and search for Registry. Open it to find and enable compiler.automake.allow.when.app.running (IntelliJ IDEA 15 and newer).


Friday, May 11, 2018

Kubernetes

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.

kubectl is a command line interface for running commands against Kubernetes clusters. This overview covers kubectl syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the kubectl reference documentation. For installation instructions see installing kubectl.


To create a deployment -
kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080

To get all deployments:
kubectl get deployments


A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker or rkt), and some shared resources.
When we create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them (as opposed to creating containers directly). Each Pod is tied to the Node where it is scheduled, and remains there until termination (according to restart policy) or deletion. In case of failures, identical Pods are scheduled on other available Nodes.

kubectl get pods
kubectl describe pods




Thursday, May 3, 2018

Interface Vs Abstract Class

1. In Java you can only extend one class but implement multiple interfaces. So if you extend a class, you lose your chance of extending another class.

2. Interfaces are used to represent an adjective or behavior, e.g. Runnable, Cloneable, Serializable, etc. So if you use an abstract class to represent behavior, your class cannot be Runnable and Cloneable at the same time, because you cannot extend two classes in Java; but if you use interfaces, your class can have multiple behaviors at the same time.

3. In time-critical applications, an abstract class was traditionally considered slightly faster than an interface, though on modern JVMs the difference is negligible.

4. If there is genuine common behavior across the inheritance hierarchy that can be coded better in one place, then an abstract class is the preferred choice. Sometimes interfaces and abstract classes can also work together: declare the function in an interface and provide the default functionality in an abstract class.


// SmartPhone holds the common state and behavior of the hierarchy
abstract class SmartPhone {
    String brand;
    int year;
    String model;

    abstract void message();
    abstract void calling();
}
// --------------------------------------------------
// interfaces represent capabilities (behaviors)
interface Camera { }
interface Music { }
interface AppleWatch { }

// a class can extend only one abstract class but implement many interfaces
class Iphone extends SmartPhone implements Camera, Music, AppleWatch {
    void message() { }
    void calling() { }
}

class Android extends SmartPhone implements Camera, Music {
    void message() { }
    void calling() { }
}

class Windows extends SmartPhone implements Camera, Music {
    void message() { }
    void calling() { }
}