Showing posts with label docker. Show all posts

Tuesday, October 9, 2018

Gulp build task failing inside docker

Leave a Comment

I have a simple Hapi.js Node API. Since I used TypeScript to write the API, I wrote a Gulp task for transpiling the code. My API works fine if I run it directly on my main machine, but I get the following error when I try to run it inside Docker:

Error: (screenshot of the error omitted)

Docker compose command:

docker-compose -f docker-compose.dev.yml up -d --build 

Here is my code: ./gulpfile:

'use strict';

const gulp = require('gulp');
const rimraf = require('gulp-rimraf');
const tslint = require('gulp-tslint');
const mocha = require('gulp-mocha');
const shell = require('gulp-shell');
const env = require('gulp-env');

/**
 * Remove build directory.
 */
gulp.task('clean', function () {
  return gulp.src(outDir, { read: false })
    .pipe(rimraf());
});

/**
 * Lint all custom TypeScript files.
 */
gulp.task('tslint', () => {
  return gulp.src('src/**/*.ts')
    .pipe(tslint({
      formatter: 'prose'
    }))
    .pipe(tslint.report());
});

/**
 * Compile TypeScript.
 */
function compileTS(args, cb) {
  return exec(tscCmd + args, (err, stdout, stderr) => {
    console.log(stdout);

    if (stderr) {
      console.log(stderr);
    }
    cb(err);
  });
}

gulp.task('compile', shell.task([
  'npm run tsc',
]));

/**
 * Watch for changes in TypeScript
 */
gulp.task('watch', shell.task([
  'npm run tsc-watch',
]));

/**
 * Copy config files
 */
gulp.task('configs', (cb) => {
  return gulp.src("src/configurations/*.json")
    .pipe(gulp.dest('./build/src/configurations'));
});

/**
 * Build the project.
 */
gulp.task('build', ['tslint', 'compile', 'configs'], () => {
  console.log('Building the project ...');
});

/**
 * Run tests.
 */
gulp.task('test', ['build'], (cb) => {
  const envs = env.set({
    NODE_ENV: 'test'
  });

  gulp.src(['build/test/**/*.js'])
    .pipe(envs)
    .pipe(mocha({ exit: true }))
    .once('error', (error) => {
      console.log(error);
      process.exit(1);
    });
});

gulp.task('default', ['build']);

./.docker/dev.dockerfile:

FROM node:latest

LABEL author="Saurabh Palatkar"

# create a specific user to run this container
# RUN adduser -S -D user-app

# add files to container
ADD . /app

# specify the working directory
WORKDIR app

RUN chmod -R 777 .
RUN npm i gulp --g
# build process
RUN npm install
# RUN ln -s /usr/bin/nodejs /usr/bin/node
RUN npm run build
# RUN npm prune --production

EXPOSE 8080

# run application
CMD ["npm", "start"]

./docker-compose.dev.yml:

version: "3.4"  services:   api:     image: node-api     build:       context: .       dockerfile: .docker/dev.dockerfile     environment:       PORT: 8080       MONGO_URL: mongodb:27017       NODE_ENV: development     ports:       - "8080:8080"     links:       - database    database:     image: mongo:latest     ports:       - "27017:27017" 

What am I missing here?

1 Answers

Answers 1

Each command in a Dockerfile is executed in a separate intermediate container, so RUN npm run build can't find the gulp executable. Try editing your Dockerfile to run the npm-related commands in a single RUN instruction:

RUN npm i gulp --g && npm install && ln -s /usr/bin/nodejs /usr/bin/node && npm prune --production 

Maybe you also need to copy app files into the container.

Try adding:

COPY . . 

just before CMD ["npm", "start"] in your Dockerfile
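Putting the pieces together, a minimal revised Dockerfile might look like the following sketch (untested; it assumes your package.json still provides the same build and start scripts):

FROM node:latest

LABEL author="Saurabh Palatkar"

# copy the application files into the image
COPY . /app
WORKDIR /app

# install gulp globally, install dependencies and build in a single layer
RUN npm i -g gulp && \
    npm install && \
    npm run build

EXPOSE 8080

# run application
CMD ["npm", "start"]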

Read More

Monday, September 24, 2018

Speeding up Go builds with go 1.10 build cache in Docker containers

Leave a Comment

I have a Go project with a large vendor/ directory which almost never changes.

I am trying to use the new go 1.10 build cache feature to speed up my builds in Docker engine locally.

Avoiding recompilation of my vendor/ directory would be enough optimization. So I'm trying to do Go equivalent of this common Dockerfile pattern for Python:

FROM python
COPY requirements.txt .              # <-- copy your dependency list
RUN pip install -r requirements.txt  # <-- install dependencies
COPY ./src ...                       # <-- your actual code (everything above is cached)

Similarly I tried:

FROM golang:1.10-alpine
COPY ./vendor ./src/myproject/vendor
RUN go build -v myproject/vendor/... # <-- pre-build & cache "vendor/"
COPY . ./src/myproject

However, this gives a "cannot find package" error (likely because you can't normally build packages in vendor/ directly either).

Has anyone been able to figure this out?

2 Answers

Answers 1

Here's something that works for me:

FROM golang:1.10-alpine
WORKDIR /usr/local/go/src/github.com/myorg/myproject/
COPY vendor vendor
RUN find vendor -maxdepth 2 -mindepth 2 -type d -exec sh -c 'go install -i github.com/myorg/myproject/{}/... || true' \;
COPY main.go .
RUN go build main.go

It makes sure the vendored libraries are installed first. As long as you don't change a library, you're good.

Answers 2

Just use go install -i ./vendor/....

Consider the following Dockerfile:

FROM    golang:1.10-alpine

ARG     APP
ENV     PTH $GOPATH/src/$APP
WORKDIR $PTH

# Pre-compile vendors.
COPY    vendor/ $PTH/vendor/
RUN     go install -i ./vendor/...

ADD     . $PTH/

# Display time taken and the list of the packages being compiled.
RUN     time go build -v

You can test it doing something like:

docker build -t test --build-arg APP=$(go list .) . 

On the project I am working on, builds without pre-compiling take ~12s and recompile 90+ packages each time; with pre-compiling they take ~1.2s and rebuild only 3 packages (the local ones).

If you still get "cannot find package", some vendored dependencies are missing. Re-running dep ensure should fix it.

Another tip, unrelated to Go, is to have your .dockerignore start with *, i.e. ignore everything and then whitelist what you need.
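For example, a whitelist-style .dockerignore for this kind of Go project might look like the sketch below (the exact entries depend on your project layout):

*
!vendor/
!main.go
!Gopkg.toml
!Gopkg.lock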

Read More

Sunday, September 23, 2018

How to execute sails js in docker

Leave a Comment

I am using Docker for my Sails.js project. Any idea how to check which command was used to run Sails.js? I have tried the history command, but it does not show the command that was previously run.

Can anybody tell me how to check which command was previously used to execute Sails.js? I need to restart my Sails.js app.

1 Answers

Answers 1

To check which command was used to start the container in Docker, try

docker ps -a --no-trunc --format "{{.ID}}: {{.Command}}"

It will show the full command used to start Sails.js in the container.
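Alternatively, as a sketch (assuming you know the container name or ID), docker inspect can print the configured entrypoint and arguments for a single container:

docker inspect --format '{{.Path}} {{.Args}}' <container-name-or-id>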

Read More

Tuesday, September 11, 2018

Unable to run cygwin in Windows Docker Container

Leave a Comment

I've been working with Docker for Windows, attempting to create a Windows Container that can run cygwin as the shell within the container itself. I haven't had any luck getting this going yet. Here's the Dockerfile that I've been messing with.

# escape=`
FROM microsoft/windowsservercore

SHELL ["powershell", "-command"]

RUN Invoke-WebRequest https://chocolatey.org/install.ps1 -UseBasicParsing | Invoke-Expression
RUN choco install cygwin -y
RUN refreshenv
RUN [Environment]::SetEnvironmentVariable('Path', $env:Path + ';C:\tools\cygwin\bin', [EnvironmentVariableTarget]::Machine)

I've tried setting the ENTRYPOINT and CMD to try and get into cygwin, but neither seems to do anything. I've also attached to the container with docker run -it and fired off the cygwin command to get into the shell, but it doesn't appear to do anything. I don't get an error, it just returns to the command prompt as if nothing happened.

Is it possible to run another shell in the Windows Container, or am I just doing something incorrectly?

Thanks!

2 Answers

Answers 1

You don't "attach" to a container with docker run: you start a container with it.

In your case, as seen here, docker run -it is the right approach.

You can try using c:\cygwin\bin\bash as an entry point, as seen in this issue.
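For example, a possible entry point (adjust the path to wherever Chocolatey installed Cygwin; the Dockerfile above puts it under C:\tools\cygwin\bin):

ENTRYPOINT ["C:\\tools\\cygwin\\bin\\bash.exe", "--login"]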

As commented in issue 32330:

Don't get me wrong, cygwin should work in Docker Windows containers.

But, it's also a little paradoxical that containers were painstakingly wrought into Windows, modeled on containers on Linux, only for people to then want to run Linux-utils in these newly minted Docker Windows containers...

That same issue is still unresolved, with new cases seen in May and June 2018:

We have an environment that compiles with Visual Studio but still we want to use git and some very useful commands taken from linux.
Also we use off-the-shelf utilities (e.g. git-repo) that use linux commands (e.g. curl, grep, ...)

Some builds require Cygwin like ICU (a cross-platform Unicode based globalization library), and worst: our builds require building it from source.


You can see an example of a crash in MSYS2-packages issue 1239:

Step 5/5 : RUN "C:\\msys64\\usr\\bin\\ls.exe"
 ---> Running in 5d7867a1f8da
The command 'cmd /S /C "C:\\msys64\\usr\\bin\\ls.exe"' returned a non-zero code: 3221225794

This can provide more information on the crash:

PS C:\msys64\usr\bin> Get-EventLog -Index 28,29,30 -LogName "Application" | Format-List -Property *

The workaround was:

PS > xcopy /S C:\Git C:\Git_Copy
PS > C:\Git_Copy\usr\bin\sh.exe --version > v.txt
PS > type v.txt

As mentioned in that thread, the output gets lost somewhere in the container, thus sending it to a text file.

Answers 2

The main solution is to use winpty in front of mintty.

Read More

Monday, September 10, 2018

Docker, Flask, SQLAlchemy: ValueError: invalid literal for int() with base 10: 'None'

Leave a Comment

I have a Flask app that can be initialized successfully and connects to a PostgreSQL database. However, when I try to dockerize this app, I get the error message below. "SQLALCHEMY_DATABASE_URI" is correct and I can connect to it, so I can't figure out where I have gone wrong.

docker-compose logs

app_1       |   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 60, in __init__
app_1       |     self.port = int(port)
app_1       | ValueError: invalid literal for int() with base 10: 'None'

Postgres database connects successfully in Docker container

postgres_1  | LOG:  database system is ready to accept connections 

config.py

from os import environ
import os

RDS_USERNAME = environ.get('RDS_USERNAME')
RDS_PASSWORD = environ.get('RDS_PASSWORD')
RDS_HOSTNAME = environ.get('RDS_HOSTNAME')
RDS_PORT = environ.get('RDS_PORT')
RDS_DB_NAME = environ.get('RDS_DB_NAME')

SQLALCHEMY_DATABASE_URI = "postgresql+psycopg2://{username}:{password}@{hostname}:{port}/{dbname}"\
                          .format(username = RDS_USERNAME, password = RDS_PASSWORD, \
                                  hostname = RDS_HOSTNAME, port = RDS_PORT, dbname = RDS_DB_NAME)

flask_app.py (entry point)

def create_app():
    app = Flask(__name__, static_folder="./static", template_folder="./static")
    app.config.from_pyfile('./app/config.py', silent=True)

    register_blueprint(app)
    register_extension(app)

    with app.app_context():
        print(db)  # -> This prints the correct path for SQLALCHEMY_DATABASE_URI
        db.create_all()
        db.session.commit()
    return app

def register_blueprint(app):
    app.register_blueprint(view_blueprint)
    app.register_blueprint(race_blueprint)

def register_extension(app):
    db.init_app(app)
    migrate.init_app(app)

app = create_app()

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080, debug=True)

Dockerfile

FROM ubuntu

RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y python-pip && pip install --upgrade pip
RUN mkdir /home/ubuntu

WORKDIR /home/ubuntu/celery-scheduler

ADD requirements.txt /home/ubuntu/celery-scheduler/
RUN pip install -r requirements.txt

COPY . /home/ubuntu/celery-scheduler

EXPOSE 5000

CMD ["python", "flask_app.py", "--host", "0.0.0.0"]

docker-compose.yml

version: '2'

services:
  app:
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    depends_on:
      - postgres

  postgres:
    restart: always
    image: postgres:9.6
    environment:
      - POSTGRES_USER=${RDS_USERNAME}
      - POSTGRES_PASSWORD=${RDS_PASSWORD}
      - POSTGRES_HOSTNAME=${RDS_HOSTNAME}
      - POSTGRES_DB=${RDS_DB_NAME}
    ports:
      - "5432:5432"

1 Answers

Answers 1

You need to set the environment variables RDS_USERNAME, RDS_PASSWORD, RDS_HOSTNAME, RDS_PORT, and RDS_DB_NAME in the Dockerfile with ENV <key> <value>, for example:

ENV RDS_PORT 5432 
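Alternatively, as an untested sketch, the same variables could be passed to the app service in docker-compose.yml instead of hard-coding them in the Dockerfile (here RDS_HOSTNAME is assumed to point at the postgres service name):

  app:
    ...
    environment:
      - RDS_USERNAME=${RDS_USERNAME}
      - RDS_PASSWORD=${RDS_PASSWORD}
      - RDS_HOSTNAME=postgres
      - RDS_PORT=5432
      - RDS_DB_NAME=${RDS_DB_NAME}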
Read More

How to create a solr core using docker-solr's image extension mechanism?

Leave a Comment

I would like to create a docker image of solr that creates a core on startup. Therefore I'm using the docker-entrypoint-initdb.d extension mechanism described for solr docker containers. The documentation says

The third way of creating a core at startup is to use the image extension mechanism explained in the next section.

But it does not explain exactly how to achieve this.

The Dockerfile I'm using is:

FROM solr:6.6

USER root

RUN mkdir /A12Core && chown -R solr:solr /A12Core

COPY --chown=solr:solr ./services-core/search/A12Core /A12Core/
COPY --chown=solr:solr ./create-a12core.sh /docker-entrypoint-initdb.d/

USER solr

RUN chmod -R a+X /A12Core

The folder A12Core contains the solr config files for the core. And the script create-a12core.sh to create the core is:

#!/bin/bash

solr-precreate A12Core /A12Core

The /A12Core dir contains the following files:

./core.properties
./conf
./conf/update-script.js
./conf/mapping-ISOLatin1Accent.txt
./conf/schema.xml
./conf/spellings.txt
./conf/solrconfig.xml
./conf/currency.xml
./conf/mapping-FoldToASCII.txt
./conf/_schema_analysis_stopwords_english.json
./conf/stopwords.txt
./conf/synonyms.txt
./conf/elevate.xml
./conf/lang
./conf/lang/stopwords_en.txt
./conf/lang/stopwords_de.txt

However, when starting a container built from the above Dockerfile and script, an infinite loop seems to be created. The output is:

/opt/docker-solr/scripts/solr-foreground: running /docker-entrypoint-initdb.d/create-a12core.sh
Executing /opt/docker-solr/scripts/solr-precreate A12Core /A12Core
/opt/docker-solr/scripts/solr-precreate: running /docker-entrypoint-initdb.d/create-a12core.sh
Executing /opt/docker-solr/scripts/solr-precreate A12Core /A12Core
/opt/docker-solr/scripts/solr-precreate: running /docker-entrypoint-initdb.d/create-a12core.sh
Executing /opt/docker-solr/scripts/solr-precreate A12Core /A12Core
/opt/docker-solr/scripts/solr-precreate: running /docker-entrypoint-initdb.d/create-a12core.sh
...

How do I create a core using the docker-entrypoint-initdb.d extension mechanism?

1 Answers

Answers 1

Provide the full path of the precreate-core script that should be executed, so edit create-a12core.sh as given below:

#!/bin/bash

/opt/docker-solr/scripts/precreate-core A12Core /A12Core

Tested and Works !!!

Read More

Friday, August 31, 2018

Docker-compose check if mysql connection is ready

Leave a Comment

I am trying to make sure that my app container does not run migrations / start until the db container is started and ready to accept connections.

So I decided to use the healthcheck and depends_on options in docker compose file v2.

In the app, I have the following

app:
    ...
    depends_on:
      db:
        condition: service_healthy

The db on the other hand has the following healthcheck

db:
  ...
  healthcheck:
    test: TEST_GOES_HERE
    timeout: 20s
    retries: 10

I have tried a couple of approaches like :

  1. Making sure the db directory is created: test: ["CMD", "test -f var/lib/mysql/db"]
  2. Getting the mysql version: test: ["CMD", "echo 'SELECT version();'| mysql"]
  3. Pinging mysqladmin (marks the db container as healthy but does not seem to be a valid test): test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]

Does anyone have a solution to this?

4 Answers

Answers 1

version: "2.1" services:     api:         build: .         container_name: api         ports:             - "8080:8080"         depends_on:             db:                 condition: service_healthy     db:         container_name: db         image: mysql         ports:             - "3306"         environment:             MYSQL_ALLOW_EMPTY_PASSWORD: "yes"             MYSQL_USER: "user"             MYSQL_PASSWORD: "password"             MYSQL_DATABASE: "database"         healthcheck:             test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost"]             timeout: 20s             retries: 10 

The api container will not start until the db container is healthy (basically until mysqladmin is up and accepting connections.)

Answers 2

If you can change the container to wait for mysql to be ready do it.

If you don't have the control of the container that you want to connect the database to, you can try to wait for the specific port.

For that purpose, I'm using a small script to wait for a specific port exposed by another container.

In this example, myserver will wait for port 3306 of mydb container to be reachable.

# Your database
mydb:
  image: mysql
  ports:
    - "3306:3306"
  volumes:
    - yourDataDir:/var/lib/mysql

# Your server
myserver:
  image: myserver
  ports:
    - "....:...."
  entrypoint: ./wait-for-it.sh mydb:3306 -- ./yourEntryPoint.sh

You can find the script wait-for-it documentation here

Answers 3

I modified the docker-compose.yml as per the following example and it worked.

  mysql:
    image: mysql:5.6
    ports:
      - "3306:3306"
    volumes:
      # Preload files for data
      - ../schemaAndSeedData:/docker-entrypoint-initdb.d
    environment:
      MYSQL_ROOT_PASSWORD: rootPass
      MYSQL_DATABASE: DefaultDB
      MYSQL_USER: usr
      MYSQL_PASSWORD: usr
    healthcheck:
      test: mysql --user=root --password=rootPass -e 'Design your own check script ' LastSchema

In my case, ../schemaAndSeedData contains multiple schema and data-seeding SQL files. The "Design your own check script" part can be something like select * from LastSchema.LastDBInsert.

While web dependent container code was

depends_on:
  mysql:
    condition: service_healthy

Answers 4

For a simple healthcheck, I used:

/usr/bin/mysql --user=root --password=rootpasswd --execute \"SHOW DATABASES;\" 

Basically it runs the simple MySQL command SHOW DATABASES;, here using the user root with the password rootpasswd as an example.

If the command succeeds, the DB is up and ready, so the healthcheck passes. You can use interval so it tests at a regular interval.

With the other fields removed for readability, here is what it would look like in your docker-compose.yaml:

version: '2.1'

services:
  db:
    ...
    healthcheck:
      test: "/usr/bin/mysql --user=root --password=rootpasswd --execute \"SHOW DATABASES;\""
      interval: 2s
      timeout: 20s
      retries: 10

  app:
    ...
    depends_on:
      db:
        condition: service_healthy
Read More

Thursday, August 16, 2018

Accessing Files on a Windows Docker Container Easily

Leave a Comment

Summary

So I'm trying to figure out a way to use Docker to spin up testing environments for customers rather easily. Basically, I've got a customized piece of software that I want to install into a Windows Docker container (microsoft/windowsservercore), and I need to be able to access the program folder for that software (C:\Program Files\SOFTWARE_NAME), as it has some logs, imports/exports, and other miscellaneous configuration files. The installation part was easy (I figured that out after a few hours of messing around with Docker and learning how it works), but transferring files in a simple manner is proving far more difficult than I would expect. I'm well aware of the docker cp command, but I'd like something that allows the files to be viewed in a file browser, so testers can quickly and easily view log/configuration files from the container.

Background (what I've tried):

I've spent 20+ hours monkeying around with running an SSH server on the Docker container, so I could just ssh in and move files back and forth, but I've had no luck. I've spent most of my time trying to configure OpenSSH, and I can get it installed, but there appears to be something wrong with the default configuration file provided with my installation, as I can't get it up and running unless I start it manually via the command line by running sshd -d. Strangely, this runs just fine, but it isn't really a viable solution as it runs in debug mode and shuts down as soon as the connection is closed. I can provide more detail on what I've tested with this, but it seems like it might be a dead end (even though I feel like this should be extremely simple). I've followed every guide I can find (though half are specific to Linux containers) and haven't gotten any of them to work, and half the posts I've found just say "why would you want to use ssh when you can just use the built-in docker commands". I want to use ssh because it's simpler from an end user's perspective, and I'd rather tell a tester to ssh to a particular IP than make them interact with Docker via the command line.

EDIT: Using OpenSSH

I start the server using net start sshd, which reports that it started successfully; however, the service stops immediately unless I have generated at least an RSA or DSA key using:

ssh-keygen.exe -f "C:\\Program Files\\OpenSSH-Win64/./ssh_host_rsa_key" -t rsa 

And modifying the permissions using:

icacls "C:\Program Files\OpenSSH-Win64/" /grant sshd:(OI)(CI)F /T 

and

icacls "C:\Program Files\OpenSSH-Win64/" /grant ContainerAdministrator:(OI)(CI)F /T 

Again, I'm using the default supplied sshd_config file, but I've tried just about every adjustment of those settings I can find and none of them help.

I also attempted to set up volumes to do this, but because the installation of our software is done at compile time in Docker, the folder that I want to map as a volume is already populated with files, which seems to make Docker fail when I try to start the container with the volume attached. This section of documentation seems to say this should be possible, but I can't get it to work. I keep getting errors when I try to start the container saying "the directory is not empty".

EDIT: Command used:

docker run -it -d -p 9999:9092 --mount source=my_volume,destination=C:/temp my_container 

Running this on a ProxMox VM.

At this point, I'm running out of ideas, and something that I feel like should be incredibly simple is taking me far too many hours to figure out. It particularly frustrates me that I see so many blog posts saying "Just use the built in docker cp command!" when that is honestly a pretty bad solution when you're going to be browsing lots of files and viewing/editing them. I really need a method that allows the files to be viewed in a file browser/notepad++.

Is there something obvious here that I'm missing? How is this so difficult? Any help is appreciated.

1 Answers

Answers 1

Try this with Docker Compose. Unfortunately, I cannot test it, since I'm on a Mac and Windows containers are not a "supported platform" there (way to go, Windows). See if this works; if not, try a volume line like this instead: ./my_volume:C:/tmp/

Dockerfile

FROM microsoft/windowsservercore

# need to escape \
WORKDIR C:\\tmp\\

# Add the program from host machine to container
ADD ["<source>", "C:\tmp"]

# Normally used with web servers
# EXPOSE 80

# Running the program
CMD ["C:\tmp\program.exe", "any-parameter"]

Docker Compose

Should ideally be in the parent folder.

   version: "3"      services:       windows:         build: ./folder-of-Dockerfile         volume:           - ./my_volume:C:/tmp/         ports:           - 9999:9092 

Folder structure

|---docker-compose.yml
|
|---folder-of-Dockerfile
    |
    |---Dockerfile

Just run docker-compose up to build and start the container. Use -d for detached mode; you should only use it once you know it's working properly.

Useful link: Manage Windows Dockerfiles

Read More

Wednesday, August 15, 2018

Swift build always build whole package in Docker

Leave a Comment

When using a Dockerfile like this one:

FROM swift:latest
RUN mkdir foo && cd foo && swift package init
RUN cd foo && swift build && swift build
RUN cd foo && swift build

When the 3rd step is run, swift build will only compile the app once, as the second execution will just reuse the already built objects, and the output will be a single Compile Swift Module 'foo' (1 sources).

When running the 4th step, though, it seems to ignore whatever was already built and rebuilds the whole thing again, although nothing was changed and there was no clean. I've tried running RUN ls /foo/.build && ls /tmp and everything seems to be in place.

What I'm actually trying to achieve is to set up my image so that I first clone the project from git and build it (so this "base" layer is cached by Docker), then COPY in any changes from the local machine and build just the new updates, but this ends up building the whole project twice.

Any idea?

Edit: here's what my actual Dockerfile looks like:

FROM swift:latest
RUN git clone git@foo.com/foo.git
RUN cd /foo && swift build
COPY . /foo
RUN cd /foo && swift build

So ideally the first 3 layers would stay cached and the last 2 would only build the new changes; instead, it ends up rebuilding the whole project.

1 Answers

Answers 1

You first need to validate that swift build is indeed capable of building incremental changes (meaning "in general", without involving Docker).

A thread like "Compile Time Incredibly Slow" (using Xcode, even with the option "Xcode will not rebuild an entire target when only small changes have occurred") does not inspire confidence.

If swift build does rebuild everything, no amount of layer cache will avoid a full rebuild.
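A quick way to check this, as a rough sketch outside of Docker, is to run the build twice on a throwaway package and see whether the second run recompiles anything:

swift package init --type executable
swift build          # full build
swift build          # should do (almost) nothing if incremental builds work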

Read More

Tuesday, July 10, 2018

nodejs app exited with code 0 when docker-compose up

Leave a Comment

I encounter the error below when running docker-compose up

nginx_1       | 2018/07/03 06:54:17 [emerg] 1#1: host not found in upstream "admin:1123" in /etc/nginx/nginx.conf:19
nginx_1       | nginx: [emerg] host not found in upstream "admin:1123" in /etc/nginx/nginx.conf:19
nginx_1 exited with code 1

docker-compose.yml

version: "2"  volumes:    mongostorage:    services:     app:     build: ./app     ports:       - "3000"     links:       - mongo       - redis      volumes:       - ./app:/var/www/app       - /var/www/app/node_modules    adminmongo:     build: ./adminMongo     ports:       - "4455"     links:       - mongo         command: node app.js    admin:     build: ./admin     ports:       - "1123"     links:       - mongo       - redis     command: node admin_app.js    nginx:     build: ./nginx     ports:       - "80:80"       - "1123:1123"       - "4455:4455"     links:       - app:app       - admin:admin    mongo:     image: mongo:2.4     environment:       - MONGO_DATA_DIR=/data/db     volumes:       - mongostorage:/data/db     ports:       - "27017:27017"    redis:     image: redis     volumes:       - ./data/redis/db:/data/db     ports:       - "6379:6379"     

dockerfile for app

FROM node:9.8
RUN mkdir -p /var/www/app
WORKDIR /var/www/app
COPY . /var/www/app
RUN npm install -g gulp pm2 notify-send
RUN npm install

CMD ["pm2-docker", "./bin/www"]

dockerfile for admin

FROM node:9.8
RUN mkdir -p /var/www/sibkladmin
WORKDIR /var/www/sibkladmin
COPY . /var/www/sibkladmin
RUN npm install -g gulp pm2 bcrypt
RUN npm install

dockerfile for nginx

FROM nginx:latest

EXPOSE 80
EXPOSE 1123
EXPOSE 4455

COPY nginx.conf /etc/nginx/nginx.conf

nginx.conf

events {
  worker_connections  1024;
}

http {

    upstream app.local {
        least_conn;
        server app:3000 weight=10 max_fails=3 fail_timeout=30s;
    }

    upstream app.local:4455 {
        least_conn;
        server adminmongo:4455 weight=10 max_fails=3 fail_timeout=30s;
    }

    upstream app.local:1123 {
        least_conn;
        server admin:1123 weight=10 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;

        server_name app.local;

        location / {
            proxy_pass http://app.local;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

    server {
        listen 1123;

        server_name app.local:1123;

        location / {
            proxy_pass http://app.local:1123;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

    server {
        listen 4455;

        server_name app.local:4455;

        location / {
            proxy_pass http://sibklapp.local:4455;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}

Updated:

Error received after docker-compose build

npm ERR! path /var/www/admin/node_modules/bcrypt/node_modules/abbrev
npm ERR! code ENOENT
npm ERR! errno -2
npm ERR! syscall rename
npm ERR! enoent ENOENT: no such file or directory, rename '/var/www/admin/node_modules/bcrypt/node_modules/abbrev' -> '/var/www/admin/node_modules/bcrypt/node_modules/.abbrev.DELETE'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2018-07-05T10_03_43_640Z-debug.log
ERROR: Service 'admin' failed to build: The command '/bin/sh -c npm install' returned a non-zero code: 254

2 Answers

Answers 1

Can you try:

NGINX dockerfile

Change:

COPY nginx.conf /etc/nginx/nginx.conf

to

COPY nginx.conf /etc/nginx/nginx.conf.d

Remove the EXPOSE lines; there is no need to expose the ports.

Compose File

Update compose file to:

version: "2"  volumes:    mongostorage:    services:     app:     build: ./app     ports:       - "3000:3000"     links:       - mongo       - redis     volumes:       - ./app:/var/www/app       - /var/www/app/node_modules     command: node app.js    adminmongo:     build: ./adminMongo     ports:       - "4455:4455"     links:       - mongo         volumes:       - ./adminMongo:/var/www/adminMongo       - /var/www/adminMongo/node_modules     command: node app.js    admin:     build: ./admin     ports:       - "1123:1123"     links:       - mongo       - redis     volumes:         - ./admin:/var/www/admin       - /var/www/admin/node_modules     command: node admin_app.js    nginx:     build: ./nginx     links:       - app:app       - admin:admin    mongo:     image: mongo:2.4     environment:       - MONGO_DATA_DIR=/data/db     volumes:       - mongostorage:/data/db     ports:       - "27017:27017"    redis:     image: redis     volumes:       - ./data/redis/db:/data/db     ports:       - "6379:6379"     

NOTE: I created a simple Node hello-world app to test; you will need to update the command: entries to match what you have in package.json.

Also Before you start your app run these commands to clean up any old containers/networks:

    docker-compose kill
    docker-compose down
    docker network prune
    docker volume prune

To start the services Run:

docker-compose build

probably overkill, but it makes sure you're not using old containers

docker-compose up --force-recreate

Local Test Output

    mongo_1       | Wed Jul  4 21:50:35.835 [initandlisten] waiting for connections on port 27017
    mongo_1       | Wed Jul  4 21:50:35.835 [websvr] admin web console waiting for connections on port 28017
    app_1         | app listening on port 3000!
    admin_1       | admin app listening on port 1123!
    adminmongo_1  | AdminMongo app listening on port 4455!
    redis_1       | 1:M 04 Jul 21:50:26.523 * Running mode=standalone, port=6379.

Answers 2

The issue indicates that you either didn't install node_modules inside the Dockerfile using npm install, or you overwrote them from the host with a COPY . /src kind of statement.

Since the module you are using (bcrypt) is a native module, it needs to be compiled inside the Docker image and not just copied from somewhere else, as it will not be compatible unless the host OS version and the image OS version are the same.
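One way to avoid shipping host-built modules into the image, as a sketch assuming the build context is the app directory, is a .dockerignore that excludes node_modules so that npm install inside the image does the compiling:

node_modules
npm-debug.log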

Read More

Monday, July 9, 2018

Kibana container to elasticsearch cloud auth err

Leave a Comment

I have a production instance of elasticsearch 5.6.9 deployed on elastic.cloud.

With an HTTP Elasticsearch instance everything is OK, but I would like to run a local Kibana connected to that HTTPS instance.

I have tried:

docker run --name kibana-prod-user \
      -e ELASTICSEARCH_URL=https://####.eu-west-1.aws.found.io:9243 \
      -e ELASTICSEARCH_PASSWORD=#### \
      -v /host/workspace/cert:/usr/share/elasticsearch/config/certificates \
      -p 3501:5601 --b kibana

but I get:

auth err

In my mount dir I have put the cert.cer of Elastic Cloud.

Any ideas?

Thank you very much

1 Answers

Answers 1

I found the solution, after understanding that the error wasn't a certificate problem.

The right script for kibana 5.6.10 is:

docker run --name kibana-prod-provider -v "$(pwd)":/etc/kibana/ -p 3502:5601 --rm kibana 

because the ELASTICSEARCH_PASSWORD env var is not managed by the Dockerfile; only the URL is.

Then in the $(pwd) directory I put this kibana.yml file:

server.host: '0'
elasticsearch.url: 'https://###.eu-west-1.aws.found.io:9243'
elasticsearch.username: elastic
elasticsearch.password: ###
Read More

Sunday, July 8, 2018

Fail to pull docker image using LCOW via SSL

Leave a Comment

I use Docker engine 18.05.0-ce-win67 (18263). On my macOS I have succeeded in pulling images from my company's private Docker registry. Then I

  • copied the files client.cert, client.key and ca.crt to my Windows 10 into:

    • C:\ProgramData\Docker\certs.d\docker.company.net\
    • C:\Users\<user>\.docker\certs.d\docker.company.net\
  • imported the certificates into my Windows global certificates store and user certificates store.

Sadly, I still get this:

> docker pull <company.docker.url>/<some image>
Error response from daemon: Get https://<company.docker.url>/v2/: remote error: tls: handshake failure

Two more things to notice:

  • If I switch to Windows Containers, I can successfully login or pull images, only fails with LCOW.
  • My private cert is signed by an intermediate cert, and the intermediate cert is contained in my client.cert.

Some references I have read:

2 Answers

Answers 1

I would try two things:

  • Did you try docker login first, before pulling the images?
  • Restart the client and repeat.

Answers 2

I had this error on windows, too. In my case, restarting the docker daemon helped.

Read More

Thursday, July 5, 2018

What is happening when using ../ with docker-compose volume

Leave a Comment

I am having problems writing files out from inside a Docker container to my host computer. I believe this is a privilege issue and prefer not to set privileged: True. A workaround for writing out files is prepending ../ to a volume in my docker-compose.yml file. For example,

version: '3'
services:
    example:
        volumes:
          - ../:/example

What exactly is ../ doing here? Is it taking from the container's privileges and "going up" a directory to the host machine? Without ../, I am unable to write out files to my host machine.

2 Answers

Answers 1

The statement volumes: ['../:/example'] makes the parent directory of the directory containing docker-compose.yml on the host (../) visible inside the container at /example. Host directory bind-mounts like this, plus some equivalent constructs using a named volume attached to a specific host directory, are the only way a container can write out to the host filesystem.
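As a sketch of the named-volume equivalent mentioned above (untested; the host path is a placeholder you would replace with your own absolute path):

version: '3'
services:
    example:
        volumes:
          - exampledata:/example
volumes:
    exampledata:
        driver: local
        driver_opts:
            type: none
            o: bind
            device: /absolute/path/to/parent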

Answers 2

The docker build command can only access the directory it is in and lower, not higher, unless you specify the higher directory as the context.

To run the docker build from the parent directory:

docker build -f /home/me/myapp/Dockerfile /home/me

Doing the same in composer:

#docker-compose.yml
version: '3.3'

services:
  yourservice:
    build:
      context: /home/me
      dockerfile: myapp/Dockerfile

Or with your example:

version: '3'
services:
    build:
        context: /home/me/app
        dockerfile: docker/Dockerfile
    example:
        volumes:
          - /home/me/app:/example

Additionally, you have to supply full paths, not relative paths, i.e.:

- /home/me/myapp/files/example:/example  

If you have a script that is generating the Dockerfile from an unknown path, you can use:

CWD=`pwd`; echo $CWD 

To refer to the current working directory. From there you can append ..

Alternatively, you can build the image from a directory one level up, use a volume that you can share with an image run from a higher directory, or output your file to stdout and redirect the output of the command to the file you need from the script that runs it.

See also: Docker: adding a file from a parent directory

Read More

Thursday, June 28, 2018

Connect to postgres in docker container from host machine

Leave a Comment

How can I connect to postgres in docker from a host machine?

docker-compose.yml

version: '2'

networks:
    database:
        driver: bridge

services:
    app:
        build:
            context: .
            dockerfile: Application.Dockerfile
        env_file:
            - docker/Application/env_files/main.env
        ports:
            - "8060:80"
        networks:
            - database
        depends_on:
            - appdb

    appdb:
        image: postdock/postgres:1.9-postgres-extended95-repmgr32
        environment:
            POSTGRES_PASSWORD: app_pass
            POSTGRES_USER: www-data
            POSTGRES_DB: app_db
            CLUSTER_NODE_NETWORK_NAME: appdb
            NODE_ID: 1
            NODE_NAME: node1
        ports:
            - "5432:5432"
        networks:
            database:
                aliases:
                    - database

docker-compose ps

      Name                          Command               State               Ports
-----------------------------------------------------------------------------------------------------
appname_app_1     /bin/sh -c /app/start.sh         Up      0.0.0.0:8060->80/tcp
appname_appdb_1   docker-entrypoint.sh /usr/ ...   Up      22/tcp, 0.0.0.0:5432->5432/tcp

From container I can connect successfully. Both from app container and db container.

List of dbs and users from running psql inside container:

# psql -U postgres
psql (9.5.13)
Type "help" for help.

postgres=# \du
                                       List of roles
    Role name     |                         Attributes                         | Member of
------------------+------------------------------------------------------------+-----------
 postgres         | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 replication_user | Superuser, Create role, Create DB, Replication             | {}
 www-data         | Superuser                                                  | {}

postgres=# \l
                                       List of databases
      Name      |      Owner       | Encoding |  Collate   |   Ctype    |   Access privileges
----------------+------------------+----------+------------+------------+-----------------------
 app_db         | postgres         | UTF8     | en_US.utf8 | en_US.utf8 |
 postgres       | postgres         | UTF8     | en_US.utf8 | en_US.utf8 |
 replication_db | replication_user | UTF8     | en_US.utf8 | en_US.utf8 |
 template0      | postgres         | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
                |                  |          |            |            | postgres=CTc/postgres
 template1      | postgres         | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
                |                  |          |            |            | postgres=CTc/postgres
(5 rows)

The DB image is not the official postgres image, but its Dockerfile on GitHub looks fine.

cat /var/lib/postgresql/data/pg_hba.conf from DB container:

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# IPv6 local connections:
host    all             all             ::1/128                 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication     postgres                                trust
#host    replication     postgres        127.0.0.1/32            trust
#host    replication     postgres        ::1/128                 trust

host all all all md5
host replication replication_user 0.0.0.0/0 md5

I tried both users with no luck

$ psql -U postgres -h localhost
psql: FATAL:  role "postgres" does not exist
$ psql -h localhost -U www-data appdb -W
Password for user www-data:
psql: FATAL:  role "www-data" does not exist

It looks like there is already a PostgreSQL instance running on that port on my host machine. How can I check that?

5 Answers

Answers 1

I ran this on Ubuntu 16.04

$ psql -h localhost -U www-data app_db
Password for user www-data:
psql (9.5.13)
Type "help" for help.

app_db=# \du
                                       List of roles
    Role name     |                         Attributes                         | Member of
------------------+------------------------------------------------------------+-----------
 postgres         | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 replication_user | Superuser, Create role, Create DB, Replication             | {}
 www-data         | Superuser                                                  | {}

And below from my mac to the VM inside which docker was running (192.168.33.100 is the IP address of the docker VM)

$ psql -h 192.168.33.100 -U www-data app_db
Password for user www-data:
psql (9.6.9, server 9.5.13)
Type "help" for help.

app_db=# \du
                                       List of roles
    Role name     |                         Attributes                         | Member of
------------------+------------------------------------------------------------+-----------
 postgres         | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 replication_user | Superuser, Create role, Create DB, Replication             | {}
 www-data         | Superuser                                                  | {}

They both work for me.

PSQL version on VM

$ psql --version
psql (PostgreSQL) 9.5.13

PSQL version on Mac

$ psql --version
psql (PostgreSQL) 9.6.9

Working

Answers 2

I have a relatively similar setup, and the following works for me to open a psql session on the host machine into the docker postgres instance: docker-compose run --rm db psql -h db -U postgres -d app_development

Where:

  • db is the name of the container
  • postgres is the name of the user
  • app_development is the name of the database

So for you, it would look like docker-compose run --rm appdb psql -h appdb -U www-data -d app_db.

Answers 3

Since you’re running it in OSX, you can always use the pre-installed Network Utility app to run a Port Scan on your host and identify if the postgres server is running (and if yes, on which port).

But I don't think you have one running on your host. The problem is that Postgres runs on 5432 by default, and the docker-compose file you are running exposes the db container on the same port, i.e. 5432. If a Postgres server were already running on your host, Docker would have tried to bind a container to a port that is already in use, and that would have produced an error.

Another potential solution:
As can be seen in this answer, mysql opens a unix socket with localhost and not a tcp socket. Maybe something similar is happening here.

Try using 127.0.0.1 instead of localhost while connecting to the server in the container.
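For example (a sketch; adjust the user and database names to your setup):

psql -h 127.0.0.1 -p 5432 -U www-data -d app_db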

Answers 4

I believe you have an issue in pg_hba.conf. Here you've specified 1 host that has access - 127.0.0.1/32.

You can change it to this:

# IPv4 local connections:
host    all             all             0.0.0.0/0            md5

This will make sure your host (totally different IP) can connect.

To check if there is an instance of postgresql already running, you can do netstat -plnt | grep 5432. If you get any result from this you can get the PID and verify the process itself.

Answers 5

I believe the problem is that you have Postgres running on the local machine at port 5432. The issue can be resolved by mapping port 5432 of the Docker container to another port on the host machine. This can be achieved by making a change in docker-compose.yml.

Change

"5432:5432"  

to

"5433:5432" 

Now the Postgres in the Docker container is reachable on host port 5433. You can try connecting to it:

psql -p 5433 -d db_name -U user -h localhost 
Read More

Monday, June 25, 2018

Docker forever in “Docker is starting..” at windows task

Leave a Comment

I have installed the stable version of Docker and it takes forever to start. So far I have not seen the notification showing "Docker is running"; I can only see the Docker icon in the task bar, showing "Docker is starting".

I am running Windows 10 Pro on an Intel Core 2 Duo E8500, which supports virtualization.

Please help.

Thanks.

2 Answers

Answers 1

This is tracked in docker/for-win issue 487 and, mostly, issue 482.

The Diagnose and Feedback menu should allow you to access the logs which are in:

 %LOCALAPPDATA%\Docker\log.txt 

It will generate a zip file with said logs and other information.

The default recommendation is:

But sometimes, all the options in the "Reset" pane are grayed out.

For testing, deactivating the AV (antivirus) is an option (again, just as a test).

Also check the state of your network adapter in the Device Manager.

If you have a third-party network product like a VPN (for instance https://www.zerotier.com/), try and uninstall it before restarting docker.

Resetting Hyper-V could help:

Go to "Turn Windows features on or off", disable all Hyper-V related features, reboot, then Docker should ask if it can enable and reboot for you.
Let it do that and see if it's fixed. If not I'd probably try manually re-enabling Hyper-V.

Similarly:

I had a problem with most recent version. I uninstalled it, removed all docker folders and server and virtual switch from hyper-v and then reinstalled and it worked.

Check if you don't have some IP address already in use.
Finally, you can perform some Hyper-V tests.

Answers 2

UPDATE

Looks like in Docker for Windows version: 17.09.0-ce-win33 (13620) they fixed the problem


This is an annoying problem that Docker for Windows has. The latest versions have minimized it a lot, but it still happens.

  1. Check whether Docker for Windows is set to start when Windows starts (this is the default behavior); if not, enable it.
  2. Shutdown the machine. No restart. Shutdown.

Every time you hit this problem, just shut down the machine. The next time Windows boots, Docker will start very fast.

I know it looks esoteric but it works.

Regards Carlos

Read More

Sunday, June 24, 2018

Unable to increase Max Application Master Resources

Leave a Comment

I am using the uhopper/hadoop Docker image to create a YARN cluster. I have 3 nodes with 64GB RAM per node, and I have given 32GB to YARN on each, so the total cluster memory is 96GB. I have added the following configuration:

    - name: YARN_CONF_yarn_scheduler_minimum___allocation___mb
      value: "2048"
    - name: YARN_CONF_yarn_scheduler_maximum___allocation___mb
      value: "16384"
    - name: MAPRED_CONF_mapreduce_framework_name
      value: "yarn"
    - name: MAPRED_CONF_mapreduce_map_memory_mb
      value: "8192"
    - name: MAPRED_CONF_mapreduce_reduce_memory_mb
      value: "8192"
    - name: MAPRED_CONF_mapreduce_map_java_opts
      value: "-Xmx8192m"
    - name: MAPRED_CONF_mapreduce_reduce_java_opts
      value: "-Xmx8192m"
    - name: YARN_CONF_yarn_nodemanager_resource_memory___mb
      value: "32768"

Max Application Master Resources is 10240 MB. I ran 5 Spark jobs with 3 GB driver memory each; 2 jobs never reached the RUNNING state because of that 10240 MB limit. I am unable to fully utilize my hardware.


How can I increase the Max Application Master Resources memory?


2 Answers

Answers 1

I hope I found an answer: if you change yarn.scheduler.capacity.maximum-am-resource-percent, then Max Application Master Resources will change. Here's the documentation: Setting Application Limits from docs.hortonworks.com.

Let me know if it worked.

Answers 2

To change the Maximum Application Master resources, you have to change yarn.scheduler.capacity.maximum-am-resource-percent, which defaults to 0.2, meaning 20% of the memory allocated to YARN.

If I am not wrong, the total memory given to YARN here is 10240 MB (10 GB), and if the maximum percentage the Application Master can use is 20%, then the memory allocated to the AM is 2 GB.

Now, if you want to allocate more memory to your Application Master, simply increase the percentage. However, it is recommended that your AM percentage not exceed 0.5. Hope that makes it clear now.
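As a sketch (the exact file location depends on your distribution), the property is typically set in capacity-scheduler.xml and the queues refreshed afterwards:

<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.5</value>
</property>

After editing, running yarn rmadmin -refreshQueues (or restarting the ResourceManager) should apply the change.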

Read More

Thursday, June 21, 2018

Why use Docker for .NET web apps when we have WebDeploy?

Leave a Comment

In the Microsoft ecosystem, people were happily deploying web apps using WebDeploy Packages until Docker came along. Suddenly everyone started preferring to use Docker instead, with articles being written telling how to WebDeploy into a Docker image.

I've searched this article (and others) for the word "why" and haven't found an explanation, leaving me to infer that the answer is just "because Docker."

I'm probably oversimplifying, but it seems that WebDeploy Packages and Docker images serve similar purposes for deployment, and it's unclear to me why I would want to take a perfectly good WebDeploy Package and put it in a Docker image. What am I missing? What additional benefits does Docker bring above and beyond what we have with WebDeploy? When should I choose one over the other, or use both together?

1 Answers

Answers 1

One of Docker's features is recording an execution environment in an archive called an image.

That way, you don't have to set up the exact same configuration on a new machine; you can simply run said image on any machine supporting Docker and presumably get the exact same execution environment (same Windows, same WebDeploy version, same IIS, ...).

A WebDeploy package is a deployment artifact (like a jar, war, or any other artifact), which does not include what is needed to run said artifact.

A docker image includes everything already installed, ready to be executed.
You can then use that same image at runtime (docker run) on any Docker host.
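For instance, a minimal sketch with a hypothetical image name my-webapp (built from a Dockerfile that installs your WebDeploy package):

docker build -t my-webapp .
docker run -d -p 8080:80 my-webapp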

Read More

Wednesday, June 20, 2018

Nginx pointed to wrong directory with Docker on Windows

Leave a Comment

I'm setting up a Laravel application with Docker, using a Docker image configuration I found here: https://blog.pusher.com/docker-for-development-laravel-php/

Now, this works fine on my Ubuntu machine (16.04), but on Windows (10 Pro) I get a weird error. It first complains about not finding a composer.json file. Then, with each request I make to localhost:8000, I get the following error:

15#15: *1 open() "/var/www/public404" failed (2: No such file or directory), client: 172.17.0.1, server: , request: "GET / HTTP/1.1", host: "localhost:8000" 

I am very new to this, but it seems that nginx points to /var/www/public404 - I have no idea how that "404" got there. I have a feeling it has to do with the line try_files $uri = 404; in the site.conf file, however, I don't really know how that works and I don't want to break it... The weird thing is that this works with Ubuntu, but not on Windows (or maybe that's not weird at all?).

I use docker build . -t my-image to build the image and docker run -p 8000:80 --name="my-container" my-image to run a container using the image.

The EOL of all the config files is set to line feed. Does anybody have any idea how I might fix this?

Dockerfile

FROM nginx:mainline-alpine
LABEL maintainer="John Doe <john@doe>"

COPY start.sh /start.sh
COPY nginx.conf /etc/nginx/nginx.conf
COPY supervisord.conf /etc/supervisord.conf
COPY site.conf /etc/nginx/sites-available/default.conf

RUN apk add --update \
php7 \
php7-fpm \
php7-pdo \
php7-pdo_mysql \
php7-mcrypt \
php7-mbstring \
php7-xml \
php7-openssl \
php7-json \
php7-phar \
php7-zip \
php7-dom \
php7-session \
php7-tokenizer \
php7-zlib && \
php7 -r "copy('http://getcomposer.org/installer', 'composer-setup.php');" && \
php7 composer-setup.php --install-dir=/usr/bin --filename=composer && \
php7 -r "unlink('composer-setup.php');" && \
ln -s /etc/php7/php.ini /etc/php7/conf.d/php.ini

RUN apk add --update \
bash \
openssh-client \
supervisor

RUN mkdir -p /etc/nginx && \
mkdir -p /etc/nginx/sites-available && \
mkdir -p /etc/nginx/sites-enabled && \
mkdir -p /run/nginx && \
ln -s /etc/nginx/sites-available/default.conf /etc/nginx/sites-enabled/default.conf && \
mkdir -p /var/log/supervisor && \
rm -Rf /var/www/* && \
chmod 755 /start.sh

RUN sed -i -e "s/;cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/g" \
-e "s/variables_order = \"GPCS\"/variables_order = \"EGPCS\"/g" \
/etc/php7/php.ini && \
sed -i -e "s/;daemonize\s*=\s*yes/daemonize = no/g" \
-e "s/;catch_workers_output\s*=\s*yes/catch_workers_output = yes/g" \
-e "s/user = nobody/user = nginx/g" \
-e "s/group = nobody/group = nginx/g" \
-e "s/;listen.mode = 0660/listen.mode = 0666/g" \
-e "s/;listen.owner = nobody/listen.owner = nginx/g" \
-e "s/;listen.group = nobody/listen.group = nginx/g" \
-e "s/listen = 127.0.0.1:9000/listen = \/var\/run\/php-fpm.sock/g" \
-e "s/^;clear_env = no$/clear_env = no/" \
/etc/php7/php-fpm.d/www.conf

EXPOSE 443 80
WORKDIR /var/www

CMD ["/start.sh"]

start.sh

#!/bin/bash

# ----------------------------------------------------------------------
# Create the .env file if it does not exist.
# ----------------------------------------------------------------------

if [[ ! -f "/var/www/.env" ]] && [[ -f "/var/www/.env.example" ]]; then
cp /var/www/.env.example /var/www/.env
fi

# ----------------------------------------------------------------------
# Run Composer
# ----------------------------------------------------------------------

if [[ ! -d "/var/www/vendor" ]]; then
cd /var/www
composer update
composer dump-autoload -o
fi

# ----------------------------------------------------------------------
# Start supervisord
# ----------------------------------------------------------------------

exec /usr/bin/supervisord -n -c /etc/supervisord.conf

site.conf

server {
    listen 80;

    root /var/www/public;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ /\. {
        deny all;
    }

    location ~ \.php$ {
        try_files $uri = 404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

nginx.conf

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log off;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/sites-enabled/*.conf;
}

supervisord.conf

[unix_http_server]
file=/dev/shm/supervisor.sock

[supervisord]
logfile=/tmp/supervisord.log
logfile_maxbytes=50MB
logfile_backups=10
loglevel=warn
pidfile=/tmp/supervisord.pid
nodaemon=false
minfds=1024
minprocs=200
user=root

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///dev/shm/supervisor.sock

[program:php-fpm7]
command = /usr/sbin/php-fpm7 --nodaemonize --fpm-config /etc/php7/php-fpm.d/www.conf
autostart=true
autorestart=true
priority=5
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autostart=true
autorestart=true
priority=10
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

1 Answers

Answers 1

site.conf

server {
  listen 80 default_server;
  root /var/www/public;

  index index.php;
  server_name localhost;

  location / {
    try_files $uri /index.php?$query_string;
  }

  location ~* \.php$ {
    fastcgi_split_path_info ^(.+\.php)(.*)$;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
  }
}
Read More

Monday, June 18, 2018

Investigating Docker connectivity issue

Leave a Comment

I am trying to reach host-x.com from docker container running on MacOS but it fails:

$ docker run ubuntu:latest \
    /bin/bash -c \
    'apt-get update &&
     apt-get -y install netcat &&
     nc -v -z -w 3 host-x.com 443 &> /dev/null && echo "Online" || echo "Offline"'

Offline

It works fine when:

  • I run the docker container on another machine:

    Online 
  • I run it on my Mac, outside of a docker container:

     nc -v -z -w 3 host-x.com 443 &> /dev/null && echo "Online" || echo "Offline"

     Online
  • I run it on my Mac from a docker container, but against other target hosts:

    $ docker run ubuntu:latest \
        /bin/bash -c \
        'apt-get update &&
         apt-get -y install netcat &&
         nc -v -z -w 3 www.google.com 443 &> /dev/null && echo "Online" || echo "Offline"'

    Online

UPDATE #1

  1. As suggested I logged in into container and checked DNS. Host name is correctly resolved:

    root@55add56ecc11:/# ping host-x.com
    PING s1-host-x.com (172.22.187.101) 56(84) bytes of data.
  2. However, ping packets are not delivered. I thought this could be caused by a conflict between the IP range of the internal Docker network and the corporate network (172.17.X.X). I tried to pin the Docker bridge IP address in my daemon configuration (see the daemon.json sketch after this list) and re-checked the connectivity, but it didn't help:

    "bip" : "10.10.10.1/8" 
  3. I checked with 3 other persons in my company (4 in total including me). 50% has access to this host (Online), 50% doesn't (Offline).

  4. I tried what @mko suggested, using netcat in interactive mode inside the container. Still timeout.

     root@37c61acc5aa5:/# nc -v -z -w 3 host-x.com 443
     s1-host-x.com [172.22.187.101] 443 (?) : Connection timed out
  5. I tried tracing the route but no success:

    traceroute -m 10 -w 1 host-x.com
    traceroute to host-x.com (172.22.187.101), 10 hops max, 60 byte packets
     1  10.10.10.1 (10.10.10.1)  0.444 ms  0.388 ms  0.364 ms
     2  * * *
     3  * * *
     4  * * *
     5  * * *
     6  * * *
     7  * * *
     8  * * *
     9  * * *
    10  * * *
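For reference, a minimal sketch of where the "bip" setting from point 2 lives, assuming the daemon configuration file is edited directly (on Docker for Mac the same JSON is entered under Preferences > Daemon > Advanced; the address below is only an example):

{
  "bip": "10.10.10.1/24"
}

A narrow mask such as /24 reduces the chance that the bridge range overlaps another internal network; if the range does overlap, traffic to those addresses is routed into the bridge instead of out to the corporate network.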

How can I investigate that?

0 Answers

Read More

Sunday, June 3, 2018

Karma Chrome Headless not working on Jenkins

Leave a Comment

When I run the below setup with Docker locally on my mac everything works fine.

But same setup does not work on Jenkins running on Ubuntu 16.04

ChromiumHeadless have not captured in 60000 ms, killing.

Following error log is from Jenkins console:

25 05 2018 06:35:09.076:INFO [karma]: Karma v2.0.2 server started at http://0.0.0.0:9222/
25 05 2018 06:35:09.079:INFO [launcher]: Launching browser Chromium_no_sandbox with unlimited concurrency
25 05 2018 06:35:09.090:INFO [launcher]: Starting browser ChromiumHeadless
25 05 2018 06:36:09.128:WARN [launcher]: ChromiumHeadless have not captured in 60000 ms, killing.
25 05 2018 06:36:09.139:INFO [launcher]: Trying to start ChromiumHeadless again (1/2).
25 05 2018 06:37:09.140:WARN [launcher]: ChromiumHeadless have not captured in 60000 ms, killing.
25 05 2018 06:37:09.147:INFO [launcher]: Trying to start ChromiumHeadless again (2/2).

Package.json

... "testProd": "./node_modules/karma/bin/karma start karma.conf-prod.js --single-run", ...

Dockerfile

FROM zenika/alpine-node:latest
LABEL name="product-web"

# Update apk repositories
RUN echo "http://dl-2.alpinelinux.org/alpine/edge/main" > /etc/apk/repositories
RUN echo "http://dl-2.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories
RUN echo "http://dl-2.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories

# Install chromium
RUN apk -U --no-cache \
    --allow-untrusted add \
    zlib-dev \
    chromium \
    xvfb \
    wait4ports \
    xorg-server \
    dbus \
    ttf-freefont \
    mesa-dri-swrast \
    grep \
    udev \
    && apk del --purge --force linux-headers binutils-gold gnupg zlib-dev libc-utils \
    && rm -rf /var/lib/apt/lists/* \
    /var/cache/apk/* \
    /usr/share/man \
    /tmp/* \
    /usr/lib/node_modules/npm/man \
    /usr/lib/node_modules/npm/doc \
    /usr/lib/node_modules/npm/html \
    /usr/lib/node_modules/npm/scripts

WORKDIR /home/dev/code
COPY . .

#RUN rm -rf node_modules && npm cache clear --force

ENV CHROME_BIN=/usr/bin/chromium-browser
ENV CHROME_PATH=/usr/lib/chromium/

RUN npm install
RUN npm run testProd && npm run buildProd

karma.conf-prod.js

const path = require('path');
module.exports = function(config) {
    config.set({
        basePath: '',
        browsers: ['ChromeHeadlessNoSandbox'],
        customLaunchers: {
            ChromeHeadlessNoSandbox: {
                base: 'ChromeHeadless',
                flags: [
                    '--no-sandbox',
                    '--user-data-dir=/tmp/chrome-test-profile',
                    '--disable-web-security'
                ]
            }
        },
        frameworks: ['mocha', 'chai'],
        captureConsole: true,
        files: [
            'node_modules/babel-polyfill/dist/polyfill.js',
            'test/root.js'
        ],
        preprocessors: {
            'src/index.js': ['webpack', 'sourcemap'],
            'test/root.js': ['webpack']
        },
        webpack: {
            devtool: 'inline-source-map',
            module: {
                loaders: [
                    {
                        test: /\.js$/,
                        loader: 'babel-loader',
                        exclude: path.resolve(__dirname, 'node_modules'),
                        query: {
                            plugins: ['transform-decorators-legacy', 'transform-regenerator'],
                            presets: ['env', 'stage-1', 'react']
                        }
                    },
                    {
                        test: /\.json$/,
                        loader: 'json-loader',
                    },
                ]
            },
            externals: {
                'react/addons': true,
                'react/lib/ExecutionEnvironment': true,
                'react/lib/ReactContext': true
            }
        },
        webpackServer: {
            noInfo: true
        },
        reporters: ['spec'],
        port: 9222,
        logLevel: config.LOG_INFO
    });
};

I even tried with logLevel: config.LOG_DEBUG, but it did not show anything missing or unusual.

2 Answers

Answers 1

Based on the issue Karma 1.6 breaks Headless support for Chrome on GitHub, this is related to slower machines: it happens because it takes more than 60 seconds before the test bundle is parsed and executed by Chrome, and therefore before the test run starts and is reported back to the Karma server. The reasons why it may take that long vary.

There are 2 ways to handle the timeout:

  1. Investigate why your test bundle takes more than 60 seconds to load and make sure it loads faster.
  2. Increase browserNoActivityTimeout to a higher value, so the test bundle has enough time to load.

This particular appearance of the timeout does not seem to be a Karma issue, but rather a problem in the project or a misconfiguration.

Based on Derek's comment:

There was a connection that was disconnecting too soon.

He found that in /static/karma.js, when the socket was created, there was a timeout value hardcoded to 2 seconds (see below). He added another 0 to make it 20 seconds, and the connection stayed open long enough for the server to respond to the initial request.

karma/client/main.js

Lines 14 to 20 in e79463b

var socket = io(location.host, {     reconnectionDelay: 500,     reconnectionDelayMax: Infinity,     timeout: 2000,     path: KARMA_PROXY_PATH + KARMA_URL_ROOT.substr(1) + 'socket.io',     'sync disconnect on unload': true   })  

The next problem he faced was that Karma thought there was no activity even though there was traffic going back and forth on the socket. To fix that he just added browserNoActivityTimeout: 60000 to the Karma configuration.
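As a rough sketch of where that setting goes, assuming the karma.conf-prod.js from the question (the exact values are examples only, not prescribed ones):

module.exports = function(config) {
    config.set({
        // ...existing settings from the config above...
        browserNoActivityTimeout: 60000, // give a slow CI machine more time between messages
        captureTimeout: 120000           // and more time for ChromiumHeadless to connect at all
    });
};

Both options are standard Karma settings; raising them only masks the slowness, so it is still worth checking why the bundle takes so long on the Jenkins agent.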

Note that you may need to change more than just the timeout exposed in the configuration file.

Answers 2

According to my current understanding of the situation, this is not a problem with Jenkins or with Docker.

Read More