Showing posts with label shell. Show all posts

Tuesday, September 11, 2018

Unable to run cygwin in Windows Docker Container

Leave a Comment

I've been working with Docker for Windows, attempting to create a Windows Container that can run cygwin as the shell within the container itself. I haven't had any luck getting this going yet. Here's the Dockerfile that I've been messing with.

# escape=`
FROM microsoft/windowsservercore

SHELL ["powershell", "-command"]

RUN Invoke-WebRequest https://chocolatey.org/install.ps1 -UseBasicParsing | Invoke-Expression
RUN choco install cygwin -y
RUN refreshenv
RUN [Environment]::SetEnvironmentVariable('Path', $env:Path + ';C:\tools\cygwin\bin', [EnvironmentVariableTarget]::Machine)

I've tried setting the ENTRYPOINT and CMD to try and get into cygwin, but neither seems to do anything. I've also attached to the container with docker run -it and fired off the cygwin command to get into the shell, but it doesn't appear to do anything. I don't get an error, it just returns to the command prompt as if nothing happened.

Is it possible to run another shell in the Windows Container, or am I just doing something incorrectly?

Thanks!

2 Answers

Answers 1

You don't "attach" to a container with docker run: you start a container with it.

In your case, as seen here, docker run -it is the right approach.

You can try as an entry point using c:\cygwin\bin\bash, as seen in this issue.
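A minimal sketch of that idea (untested; C:\cygwin\bin\bash.exe is the path from the linked issue, so adjust to C:\tools\cygwin\bin\bash.exe for the Chocolatey layout in the question's Dockerfile):

```dockerfile
# escape=`
FROM microsoft/windowsservercore
# ...install cygwin as in the question's Dockerfile...
# Start bash directly as the container entry point, in interactive login mode
ENTRYPOINT ["C:\\cygwin\\bin\\bash.exe", "--login", "-i"]
```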

As commented in issue 32330:

Don't get me wrong, cygwin should work in Docker Windows containers.

But, it's also a little paradoxical that containers were painstakingly wrought into Windows, modeled on containers on Linux, only for people to then want to run Linux-utils in these newly minted Docker Windows containers...

That same issue is still unresolved, with new cases seen in May and June 2018:

We have an environment that compiles with Visual Studio, but we still want to use git and some very useful commands taken from Linux.
Also, we use off-the-shelf utilities (e.g. git-repo) that use Linux commands (e.g. curl, grep, ...)

Some builds require Cygwin like ICU (a cross-platform Unicode based globalization library), and worst: our builds require building it from source.


You can see an example of a crash in MSYS2-packages issue 1239:

Step 5/5 : RUN "C:\\msys64\\usr\\bin\\ls.exe"
 ---> Running in 5d7867a1f8da
The command 'cmd /S /C "C:\\msys64\\usr\\bin\\ls.exe"' returned a non-zero code: 3221225794

This can be used to get more information on the crash:

PS C:\msys64\usr\bin> Get-EventLog -Index 28,29,30 -LogName "Application" | Format-List -Property *

The workaround was:

PS > xcopy /S C:\Git C:\Git_Copy
PS > C:\Git_Copy\usr\bin\sh.exe --version > v.txt
PS > type v.txt

As mentioned in that thread, the output gets lost somewhere in the container, thus sending it to a text file.

Answers 2

Mainly, the solution is to use winpty ahead of mintty.
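For example, an illustrative invocation (the paths are assumptions based on a Chocolatey cygwin install): winpty bridges the Windows console API to the plain stdin/stdout pipes a container provides, which is what mintty/bash need to behave like a terminal.

```shell
# Hypothetical: run cygwin's bash through winpty inside the container
winpty C:\tools\cygwin\bin\bash.exe --login -i
```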

Read More

Sunday, June 17, 2018

“Sudo su - weblogic” via a Java Program?

Leave a Comment

I am trying to connect my remote unix machine and execute some ssh commands using a java program.

connection = new Connection(hostname);
connection.connect();
boolean isAuthenticated = connection.authenticateWithPassword(username, password);
if (isAuthenticated == false)
    throw new IOException("Authentication failed.");
Session session = connection.openSession();
session.execCommand("sudo su - weblogic");

Here it asks for the password again and, of course, I can't provide it because there is no terminal. So I created a user.sh file in my unix user home directory (/home/..../bharat) with the content below.

echo <mypassword> | sudo -S su - weblogic
sudo -S su - weblogic

but now if I call bash user.sh like below

session.execCommand("bash user.sh");  

after logging in with my user in Java, it gives the error below, and I could not figure out the resolution for this yet.

sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo

Please help :)

1 Answers

Answers 1

As you and @rkosegi say, su needs a terminal session for the password.

It looks like you are using the Ganymed SSH-2 library in the example? It has an option for a shell session. You now need to handle reading and writing through stdout and stdin directly, though.

For example, with a couple of methods to keep it simpler:

public class SshTerminal {
    private Connection connection;
    private Session session;

    private Reader reader;
    private PrintWriter writer;
    private String lastResponse;

    public SshTerminal(String hostname, String username, String password)
            throws IOException {
        connection = new Connection(hostname);
        connection.connect();
        boolean isAuthenticated = connection.authenticateWithPassword(username, password);
        if (!isAuthenticated)
            throw new IOException("Authentication failed.");
        session = connection.openSession();
        session.requestDumbPTY();
        session.startShell();

        writer = new PrintWriter(session.getStdin());
        reader = new InputStreamReader(session.getStdout());
    }

    public void send(String command) {
        writer.print(command + "\n");
        writer.flush();
    }

    public void waitFor(String expected) throws IOException {
        StringBuilder buf = new StringBuilder();
        char[] chars = new char[256];
        while (buf.indexOf(expected) < 0) {
            int length = reader.read(chars);
            System.out.print(new String(chars, 0, length));
            buf.append(chars, 0, length);
        }

        int echoEnd = buf.indexOf("\n");
        int nextPrompt = buf.lastIndexOf("\n");
        if (nextPrompt > echoEnd)
            lastResponse = buf.substring(echoEnd + 1, nextPrompt);
        else
            lastResponse = "";
    }

    public String getLastResponse() {
        return lastResponse;
    }

    public void disconnect() {
        session.close();
        connection.close();
    }
}

This then worked fine:

SshTerminal term = new SshTerminal(host, username, password);

term.waitFor("$ ");
term.send("su -");
term.waitFor("Password: ");
term.send(rootPassword);
term.waitFor("# ");
term.send("ls /root");
term.waitFor("# ");
term.send("cat /file-not-found 2>&1");
term.waitFor("# ");

term.send("cat /var/log/messages");
term.waitFor("# ");
String logFileContent = term.getLastResponse();

term.send("exit");
term.waitFor("$ ");
term.send("exit");

term.disconnect();

String[] lines = logFileContent.split("\n");
for (int i = 0; i < lines.length; i++)
    logger.info("Line {} out of {}: {}", i + 1, lines.length, lines[i]);

That includes examples of parsing the lines in a response, and forcing error output through.
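The `cat /file-not-found 2>&1` step works because the redirection merges stderr into stdout, so the error text comes back through the same stream the terminal reader is watching. A minimal local sketch of that redirection:

```shell
# Without the 2>&1 the error text goes to stderr and would bypass a reader
# that only watches stdout; with it, the message is captured like any output.
msg=$(cat /file-not-found 2>&1 || true)
echo "$msg"
```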

Clearly some of the responses there might be different in your environment.

Read More

Sunday, April 8, 2018

How to store templates for non-php files in Laravel?

Leave a Comment

We can store PHP template files using the Blade templating engine in Laravel. But I want to create config files on a remote server, each having 20-30 lines.

Till now, I was doing this using Perl. I would execute a Perl file that dumped the contents into a file, and I passed the variables as parameters.

Now, I want to do it without using Perl. I tried looking for a solution but failed. To make it easy to understand, here is exactly what I am trying to do.

I want to create the following config file on a remote server (Just an example).

Example.conf

<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
</VirtualHost>

Here, example.com and www.example.com will vary in every config file.

Now, I want to create this file from my Laravel application on the remote server. Is there any way I can store a template of this config file, compile it, and put the file on the remote server?

I know how I can put this file on the remote server. I just want to know the best possible way to store the template and customize it when needed.

3 Answers

Answers 1

You can put it into a blade template like server-config.blade.php and then, when you want to place it on a server, you just call:

\File::put('place-on-the-server.conf', view('server-config')->render()); 

which will generate content based on the blade template (so you can pass variables to this template).

Answers 2

As I understand it, your scenario is that you have a website where people can request websites, and you want to auto-generate the virtual-host file inside your application and then transfer it to the remote server?

If that is the case, you have a lot of options.

1 - The answer of Filip

2 - Console commands

You can generate your file any way you like. I personally have a structure in my Laravel applications where I save blank files (templates). I then copy them over and replace the placeholders with the given parameters via a console command; see Artisan Commands.

3 - Server scripts

Another way could be using Laravel Envoy. You will have a script on the server that generates the needed file, and you just call a function in your Laravel Envoy file to have it executed. The good part about this is that you can make a new user on the server specifically for Envoy and only allow it to run specific commands. This way there is very little chance of messing up the server.

Answers 3

I have done something similar previously in Laravel by just creating regular PHP functions.

So what I did is have a template like the one you have; call it Example.conf and put it somewhere accessible:

<VirtualHost *:80>
    ServerName example
    ServerAlias www.example
</VirtualHost>

Then create a function in your (business logic) controller that reads -> modifies -> saves the file, something like:

$file = "./Example.conf";

modifyConfigFile("MyNewDom.Net", readConfigFile($file));

function readConfigFile($file) {
    $myFile = fopen($file, "r") or die("Unable to open file!");
    $content = fread($myFile, filesize($file));
    fclose($myFile);
    return $content;
}

function modifyConfigFile($domain, $content) {
    $myFile = fopen("$domain.conf", "w") or die("Unable to open file!");
    $txt = str_replace("example", $domain, $content);
    fwrite($myFile, $txt);
    fclose($myFile);
}

This will result in creating a new file called MyNewDom.Net.conf with the following content:

<VirtualHost *:80>
    ServerName MyNewDom.Net
    ServerAlias www.MyNewDom.Net
</VirtualHost>

Then you can transfer it to the server as you mentioned in your question.

The code here is just for inspiration and not intended to be the final solution; you can modify it to fit your project.

Read More

Monday, August 7, 2017

Java Console Pushes Input Text When Logging Occurs

Leave a Comment

Through MobaXterm's SSH feature, I'm running a Java application on a remote Linux server. A problem arises when I attempt to type into the terminal (to process user input requests via Scanner) and any logging occurs. The text I'm typing is automatically pushed into the logging section when any print statements happen.

Clarifying example:

  1. I manually type "MY_INPUT_TO_SET_SOME_VARIABLE 50" into the console (and never press ENTER).

    (screenshot)

  2. Some logging on the server occurs and automatically "sends" the manually typed "MY_INPUT_TO_SET_SOME_VARIABLE 50" into the display area.

    (screenshot)

    (above, you can see 50 is appended to 09:08 when I never pressed enter).

The desired behavior is to allow the power user to simply type text in the terminal's text area (or somewhere reasonable) until the ENTER key is pressed. The text in the terminal's text area should not be pushed automatically when statements are logged or printed. I looked in the terminal settings and wasn't able to find anything to modify this behavior.

1 Answers

Answers 1

As others already mentioned in the comment section there is not much you can do about that behaviour.

However, you usually don't want logging on the tty you're working with.

If you have root rights on the system you connect to, try to suppress the log messages on the console and redirect them to a logfile, unless there is a good reason not to. The method to do so differs depending on who is sending the messages.
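For an app you start yourself, the simplest variant is plain shell redirection, so nothing reaches the tty at all. A generic sketch (app.log and the stand-in command are placeholders for your own application):

```shell
# A stand-in command writes one line to stdout and one to stderr;
# both are redirected into app.log so neither reaches the tty.
sh -c 'echo normal-output; echo log-noise >&2' > app.log 2>&1
cat app.log
```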

Another possibility is to start a screen session in your terminal to open a new tty. For ease of use I would connect directly into a screen session:

ssh -t user@server /usr/bin/screen 

If you create a .screenrc file in the home directory of the user you connect to, put

startup_message off 

in it if you don't like the screen start message. You can even start your console app with it, so that the screen session ends when you stop your app.

ssh -t user@server /usr/bin/screen your_start_command_here 

Screen has more features like naming a session, reattaching to a session etc. See the manual for further details.

(The screen solution apparently only works if the log messages on the screen are not produced by your application. In that case, configure your logger so that it does not log to stdout.)

Read More

Thursday, July 20, 2017

Is there an equivalent to 'adb shell input keyboard text' for iOS?

Leave a Comment

For Android devices, we can use the Android Debug Bridge to invoke the input program and send arbitrary strings so that the device will react as though the text was typed by the user on the device.
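For reference, the Android side looks like this (illustrative only; a device must be connected and authorized):

```shell
# Types the string into whatever field currently has focus on the device;
# %s encodes a space for the input tool.
adb shell input text 'hello%sworld'
adb shell input keyevent 66   # KEYCODE_ENTER
```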

For iOS, the closest hack I have found is to make Linux look like a wireless keyboard and that particular hack seems to no longer work with the latest iPad on Ubuntu 13.10. Moreover, even if it worked, it would be less flexible than input keyboard text because one could not copy and paste a string to send.

Is there an iOS equivalent to adb shell input keyboard text?

0 Answers

Read More

Monday, May 15, 2017

How to fetch the number of commits made by a developer per day in a Git repository for a particular branch

Leave a Comment

I'm trying to send a report which contains the count of commits done by developers every day in a git repository.

#read all the inputs
read -p "Enter the Branch Name:" branchname
read -p "Enter the from date:" frmdate
read -p "Enter the To date:" todate

#execute the command to get the commit history
git log origin/$branchname --name-status --pretty=format:"%cn committed %h on %cd full" --after="$frmdate 00:00" --before="$todate 23:59" --decorate | git shortlog -s -n > history.txt

This script helps me create a file which contains which files were changed and by whom on a given date. But I need the count of commits made by individual developers.

I tried with git shortlog -s -n, but it gives the overall commit count per developer across all branches.

I need to create a report with each developer's commit count on a daily basis.

4 Answers

Answers 1

Well.... what I would do is:

  • Get the list of developers who worked on the branch since yesterday.
  • Pipe that list into a while loop so that you can get what each one did.

It would be something like:

the_date=$( date +%F )
git log --pretty="%ae" --since=yesterday the-branch | sort | uniq | while read author; do
    git log --author=$author --since=yesterday the-branch > "$the_date"_"$author".txt
done

If you need more information (like the files that were changed and so on), just add more options to the log call inside the while loop.

Answers 2

Try this in one line (as one command):

git log --pretty="%cd %aE" --date='format:%Y-%m-%d' BRANCH | sort -r | uniq -c | grep AUTHOR_YOU_ARE_INTERESTED_IN 

Example output:

  1 2017-05-10 sylvie@bit-booster.com
  2 2017-04-13 sylvie@bit-booster.com
  1 2017-03-30 sylvie@bit-booster.com
  1 2017-03-03 sylvie@bit-booster.com
  2 2017-01-24 sylvie@bit-booster.com
  1 2016-12-14 sylvie@bit-booster.com
  1 2016-11-23 sylvie@bit-booster.com
  1 2016-11-21 sylvie@bit-booster.com
  1 2016-11-18 sylvie@bit-booster.com
  3 2016-11-16 sylvie@bit-booster.com

Missing dates in the report imply no commits for that person on that branch on the missing dates.

The number on the far left (1, 2, 1, 1, etc.) is the number of commits that author made on that day.
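To see the pipeline end to end, here is a self-contained sketch against a throwaway repository (the author name, email, and file are synthetic):

```shell
# Build a scratch repo with two commits by one author, then run the
# same count pipeline as above: one line per (date, author), count first.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo one > file.txt && git add file.txt && git commit -qm "first"
echo two >> file.txt && git commit -aqm "second"
counts=$(git log --pretty="%cd %aE" --date='format:%Y-%m-%d' | sort -r | uniq -c)
echo "$counts"
```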

Answers 3

git shortlog can produce a report of commit counts per developer within a range of commits. Given start and end dates, you can find the SHA1s to use as range endpoints using git rev-list, for example:

start=$(git rev-list -n1 master --before START_DATE)
end=$(git rev-list -n1 master --before END_DATE)
git shortlog -sn $start..$end

Answers 4

I think the code block below should work for you.

#read all the inputs
read -p "Enter the Branch Name:" branchname
read -p "Enter the from date:" frmdate
read -p "Enter the To date:" todate

#execute the command to get the commit history
git log origin/$branchname --pretty=format:"%cn %ci" \
--after="$frmdate 00:00" --before="$todate 23:59" | gawk '{arr[$2][$1]++}
  END{
    PROCINFO["sorted_in"] = "@ind_str_desc";
    for (date_arr in arr){
        printf("%s:\n", date_arr);
        PROCINFO["sorted_in"] = "@val_num_desc";
        for (author in arr[date_arr]){
            printf("\t%s: %s\n", author, arr[date_arr][author]);
        }
    }
  }'
echo "=================================="
git shortlog -s -n

The logic is:

  1. Get commits rows with 2 columns: commit author and commit date;
  2. Make a SQL-like group by and order by query with help of gawk.

*Notice that this does not work for author names with whitespace in them.

Read More

Monday, March 20, 2017

Downloading and running docker images from S3 to an ec2 instance

Leave a Comment

I'm new to CloudFormation.

I have a script that creates a stack and an instance perfectly.

I now have a shell script to use to add an application to an ec2 instance. However this script keeps failing to download from s3.

Not sure what to do next and a lot of similar questions online have contradictory answers. Not sure where to go with this now. Can someone help me out?

Is using aws s3 cp s3: the wrong move?

"Type": "AWS::EC2::Instance",
"Metadata": {
  "UserData": {
    "Fn::Base64": {
      "Fn::Join": [
        "",
        [
          "#!/bin/bash -e\n",
          "aws s3 cp s3://path/automation.tar.gz/tmp\n",
          "aws s3 cp s3://path/oracle-instance.tar.gz/tmp\n",
          "aws s3 cp s3://path/DockerTools-4.5.0.0-a83.tar.gz/tmp\n",
          "cd /tmp\n",
          "tar -xzvf DockerTools-4.5.0.0-a83.tar.gz\n",
          "cd 4.5.0.0-a83\n",
          "chmod u+x *\n",
          "sudo ./dockerTools.sh installFull\n"
        ]
      ]
    }
  },

1 Answers

Answers 1

Using aws s3 cp in your instance's UserData cloudinit script is a good approach for downloading/installing an object from S3 onto a new EC2 instance. However, you need to ensure that your EC2 instance has the necessary S3 permissions to access the object being downloaded, which you can do with an IAM Role for EC2.

To setup an IAM Role for EC2 from a CloudFormation template, use the AWS::IAM::InstanceProfile resource, referencing an AWS::IAM::Role resource with an AssumeRolePolicyDocument delegating access to ec2.amazonaws.com, with a Policy designed to grant least privilege (in this case, allowing 's3:GetObject' only for the specific S3 asset being downloaded).

Here's what a complete example would look like for your case (using the more concise YAML CloudFormation template syntax):

Description: Install S3 archives on a new EC2 instance with cloudinit
Parameters:
  ImageId:
    Description: Image ID to launch EC2 instances.
    Type: AWS::EC2::Image::Id
    # us-east-1 amzn-ami-hvm-2016.09.1.20161221-x86_64-gp2
    Default: "ami-9be6f38c"
  S3Path:
    Description: S3 bucket/object key path prefix
    Type: String
    Default: "MyS3Bucket/path"
Resources:
  EC2Role:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal: {Service: [ ec2.amazonaws.com ]}
          Action: ["sts:AssumeRole"]
      Path: /
      Policies:
      - PolicyName: EC2Policy
        PolicyDocument:
          Version: 2012-10-17
          Statement:
          - Effect: Allow
            Action: ['s3:GetObject']
            Resource: !Sub 'arn:aws:s3:::${S3Path}/*'
  RootInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Path: /
      Roles: [ !Ref EC2Role ]
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref ImageId
      InstanceType: m3.medium
      IamInstanceProfile: !Ref RootInstanceProfile
      UserData:
        "Fn::Base64":
          !Sub |
            #!/bin/bash -e
            S3_PATH=${S3Path}
            aws s3 cp s3://$S3_PATH/automation.tar.gz /tmp
            aws s3 cp s3://$S3_PATH/oracle-instance.tar.gz /tmp
            aws s3 cp s3://$S3_PATH/DockerTools-4.5.0.0-a83.tar.gz /tmp
            cd /tmp
            tar -xzvf DockerTools-4.5.0.0-a83.tar.gz
            cd 4.5.0.0-a83
            chmod u+x *
            ./dockerTools.sh installFull

Note:

  • UserData belongs in the Properties section of the AWS::EC2::Instance resource, not the Metadata section as you have in your example.
  • sudo is not needed in a user-data script, because cloudinit already executes user-data as root.
  • You may also need to install the AWS CLI manually if you're using an AMI other than Amazon Linux (which comes with it pre-installed).

Finally, if you continue to have issues, you can look at the output logs on the instance (ssh into the instance and run cat /var/log/cfn-init.log and cat /var/log/cloud-init-output.log) to find any underlying script error output. (However, in your example above you won't have any relevant output in there to start, because until UserData is properly set in the Properties section your script isn't being executed at all.)

Read More

Wednesday, March 15, 2017

Run shell script from python with permissions

Leave a Comment

I have the most simple script called update.sh

#!/bin/sh
cd /home/pi/circulation_of_circuits
git pull

When I call this from the terminal with ./update.sh, I get "Already up-to-date" or it updates the files as expected.

I also have a python script; inside that script is:

subprocess.call(['./update.sh'])

When that calls the same script I get:

Permission denied (publickey). fatal: Could not read from remote repository.

Please make sure you have the correct access rights and the repository exists.

(I use SSH).

----------------- update --------------------

Someone else had a look for me:

OK so some progress. When I boot your image I can't run git pull in your repo directory and the bash script also fails. It seems to be because the bitbucket repository is private and needs authentication for pull (the one I was using was public so that's why I had no issues). Presumably git remembers this after you type it in the first time, bash somehow tricks git into thinking it's you typing the command subsequently but running it from python isn't the same.

I'm not a git expert but there must be some way of setting this up so python can provide the authentication.

10 Answers

Answers 1

Sounds like you need to give your ssh command a public or private key it can access, perhaps:

ssh -i /backup/home/user/.ssh/id_dsa user@unixserver1.nixcraft.com 

-i tells it where to look for the key

Answers 2

I believe this answer will help you: http://serverfault.com/questions/497217/automate-git-pull-stuck-with-keychain?answertab=votes#tab-top

I didn't use ssh-agent, and it worked. Change your script to the following and try:

#!/bin/bash
cd /home/pi/circulation_of_circuits

ssh-add /home/yourHomefolderName/.ssh/id_rsa
ssh-add -l
git pull

This assumes that you have configured your ssh key correctly.

Answers 3

It seems like your version control system needs authentication for the pull, so you can build the python script with pexpect:

import pexpect
child = pexpect.spawn('./update.sh')
child.expect('Password:')
child.sendline('SuperSecretPassword')

Answers 4

This problem is caused by the git repo authentication failing. You say you are using SSH, and git is complaining about publickey auth failing. Normally you can use git commands on a private repo without inputting a password. All this would imply that git is using ssh, but in the latter case it cannot find the correct private key.

Since the problem only manifests itself when run through another script, it is very likely caused by something messing with the environment variables. Subprocess.call should pass the environment as is, so there are a couple of usual suspects:

  1. sudo.
    • if you are using sudo, it will pass a mostly empty environment to the process
  2. the python script itself
    • if the python script changes its env, those changes will get propagated to the subprocess too.
  3. sh -l or su -
    • these commands set up a login shell, which means their environment gets reset to defaults.

Any of these reasons could hide the environment variables ssh-agent (or some other key management tool) might need to work.
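You can see the effect directly: ssh-agent advertises itself through environment variables like SSH_AUTH_SOCK, and a scrubbed environment (as produced by sudo or a login shell) simply doesn't carry them. A quick sketch:

```shell
# SSH_AUTH_SOCK stands in for what ssh-agent exports; `env -i` mimics the
# scrubbed environment produced by sudo or a login shell.
SSH_AUTH_SOCK=/tmp/agent.sock
export SSH_AUTH_SOCK
with_env=$(sh -c 'echo "${SSH_AUTH_SOCK:-unset}"')
clean_env=$(env -i sh -c 'echo "${SSH_AUTH_SOCK:-unset}"')
echo "inherited: $with_env, scrubbed: $clean_env"
```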

Steps to diagnose and fix:

  1. Isolate the problem.

    • Create a minimal python script that does nothing else than runs subprocess.call(['./update.sh']). Run both update.sh and the new script.
  2. Diagnose the problem and fix accordingly:

    a) If update.sh works, and the new script doesn't, you are probably experiencing some weird corner case of system misconfiguration. Try upgrading your system and python; if the problem persists, it probably requires additional debugging on the affected system itself.

    b) If both update.sh and the new script work, then the problem lies within the outer python script calling the shell script. Look for occurrences of sudo, su -, sh -l, env and os.environ, one of those is the most likely culprit.

    c) If neither the update.sh nor the new script work, your problem is likely to be with ssh client configuration; a typical cause would be that you are using a non-default identity, did not configure it in ~/.ssh/config but used ssh-add instead, and after that, ssh-agent's cache expired. In this case, run ssh-add identityfile for the identity you used to authenticate to that git repo, and try again.

Answers 5

Try using the sh package instead of the subprocess call: https://pypi.python.org/pypi/sh. I tried this snippet and it worked for me.

#!/usr/local/bin/python

import sh

sh.cd("/Users/siyer/workspace/scripts")
print sh.git("pull")

Output:

Already up-to-date.

Answers 6

With Git 1.7.9 or later, you can just use one of the following credential helpers:

With a timeout

git config --global credential.helper cache 

... which tells Git to keep your password cached in memory for (by default) 15 minutes. You can set a longer timeout with:

git config --global credential.helper "cache --timeout=3600" 

(That example was suggested in the GitHub help page for Linux.) You can also store your credentials permanently if so desired.
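A sketch of setting the value and reading it back, using a throwaway config file (-f) so your real ~/.gitconfig is untouched:

```shell
# Write the helper setting into a scratch config file and read it back.
cfg=$(mktemp)
git config -f "$cfg" credential.helper "cache --timeout=3600"
helper=$(git config -f "$cfg" credential.helper)
echo "$helper"
```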

Saving indefinitely

You can use the git-credential-store via

git config credential.helper store 

GitHub's help also suggests that if you're on Mac OS X and used Homebrew to install Git, you can use the native Mac OS X keystore with:

git config --global credential.helper osxkeychain 

For Windows, there is a helper called Git Credential Manager for Windows or wincred in msysgit.

git config --global credential.helper wincred # obsolete 

With Git for Windows 2.7.3+ (March 2016):

git config --global credential.helper manager 

For Linux, you can use gnome-keyring (or another keyring implementation such as KWallet).

Finally, after executing one of the suggested commands once manually, you can execute your script without changes.

Answers 7

import subprocess

subprocess.call("sh update.sh", shell=True)

Answers 8

I can reproduce your fault. It has nothing to do with permissions; it depends on how ssh is installed on your system. To verify it's the same cause, I need the diff output.

Save the following to a file called log_shell_env.sh:

#!/bin/bash

log="shell_env"$1
echo "create shell_env"$1

echo "shell_env" > $log

echo "whoami="$(whoami) >> $log
echo "which git="$(which git) >> $log
echo "git status="$(git status 2>&1) >> $log
echo "git pull="$(git pull 2>&1) >> $log
echo "ssh -vT git@github.com="$(ssh -T git@github.com 2>&1) >> $log

echo "ssh -V="$(ssh -V 2>&1) >> $log
echo "ls -al ~/.ssh="$(ls -a ~/.ssh) >> $log

echo "which ssh-askpass="$(which ssh-askpass) >> $log
echo "ps -e | grep [s]sh-agent="$(ps -e | grep [s]sh-agent ) >> $log
echo "ssh-add -l="$(ssh-add -l) >> $log

echo "set=" >> $log
set >> $log

Set execute permission and run it twice:

  1. From the console, without a parameter
  2. From your python script, with the parameter '.python'

Please, run it really from the same python script!

For instance:

try:
    output = subprocess.check_output(['./log_shell_env.sh', '.python'], stderr=subprocess.STDOUT)
    print(output.decode('utf-8'))
except subprocess.CalledProcessError as cpe:
    print('[ERROR] check_output: %s' % cpe)

Do a diff shell_env shell_env.python > shell_env.diff. The resulting shell_env.diff should show no more than the following diffs:

15,16c15,16
< BASH_ARGC=()
< BASH_ARGV=()
---
> BASH_ARGC=([0]="1")
> BASH_ARGV=([0]=".python")
48c48
< PPID=2209
---
> PPID=2220
72c72
< log=shell_env
---
> log=shell_env.python

Come back and comment if you get more diffs; update your question with the diff output.

Answers 9

Use the following python code. This will import the os module in python and make a system call with sudo permissions.

#!/bin/python
import os

os.system("sudo ./update.sh")

Answers 10

OK, so this is a repetition of my comment, since most answers are SSH- or Git-based.

Have you tried the solution:

cmd=['sudo', '-u', 'yourusername', 'path to your bash executable', '/home/pi/circulation_of_circuits/update.sh']

already?

In your own comments on your original question, you mention you run your Python script with root privileges (userID = 0). But as kennytm mentioned in the comments, git pull cannot be run as root. So, you could either try to:

  1. Not run your python script at root
  2. Run as root, but change the userID before executing the subprocess. But once you've done that, the script can't return to root again without re-entering your sudo password.
  3. Run as root, but execute the python file with your local user's privileges. Which is what the above-mentioned solution tries to do.

Final point of attention: in your question you state you are running subprocess.call(), but in your own comments you state you are running subprocess.Popen(). I'd recommend the latter. The differences are explained here.

Read More

Sunday, April 3, 2016

iOS 9.x settings blank after 1st launch of an app under development

Leave a Comment

I am having a consistent problem with iOS Settings not showing up after the first launch of a new app. Existing apps appear to be OK.

In the build phase, I have a script which used for some modification but that script has been executed as pre-action.

I compile/build/run in Xcode to Simulator or to iPhone with Xcode 7.2.1.

The first time I run it, the Settings appear. Run the app again the same way and I get a blank sheet where the Settings for this app used to be.

I see the Settings for this app just the one time. I can switch back and forth and the Settings remain visible, the blank sheet only appears on 2nd and subsequent launches of the app from Xcode.

The Navigation bar is visible throughout, but in the blank case there is absolutely nothing below it except the system options.

UPDATE

I have the following code which copies the required Root.plist into the main Settings bundle.

if [ "${CONFIGURATION}" = "Release" ]
then
cp "${SRCROOT}/Product_Settings.bundle/Root.plist" "${SRCROOT}/Settings.bundle/Root.plist"
elif [ "${CONFIGURATION}" = "Debug" ]
then
cp "${SRCROOT}/Develop_Settings.bundle/Root.plist" "${SRCROOT}/Settings.bundle/Root.plist"
fi

Develop_Settings : Development Setting bundle
Product_Settings : Production setting bundle

1 Answers

Answers 1

I spent a lot of time investigating and finally realized that it's a bug in the new Xcode 7.x. You need to kill the Settings application from the background in order to see the updated settings after the second launch of the application.

Read More

Sunday, March 6, 2016

Torch install without curl

Leave a Comment

I need to install Torch on server,

but its installation guide requires the following line as the first step:

curl -s https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bash 

curl is not installed on my server, and I'm not root, so I don't have permission to install it.

I downloaded the script file at the URL and chmod-ed it,

but I can't execute it, again due to permissions.

Is there a way to install torch without usage of curl?

bash install-deps  

resulted in

xxxx is not in the sudoers file. This incident will be reported. 

1 Answers

Answers 1

Torch's dependencies need to be installed; if you install them via this script, that will require root privileges.

If you saved the script, you can start it (without chmod) with

bash install-deps 
Read More