Friday, September 30, 2016

How can I set up a d3 chart inside a custom directive in Angular?

Leave a Comment

I've been following a few walkthroughs on how to implement d3 charts in an Angular application. Basically, I'm trying to implement the following d3 chart inside my custom Angular directive ('workHistory'). For the purpose of this question, I'm following a simple bar chart example, set up like so:

index.html

<!doctype html>
<html lang="en" ng-app="webApp">
<head>
    <meta charset="utf-8">
    <title>My Portfolio</title>
    <!--Stylesheets -->
    <link rel="stylesheet" href="styles/main.css"/>
    <link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.min.css"/>
    <!--Libraries -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.5.8/angular.min.js"></script>
    <script src="bower_components/jquery/dist/jquery.min.js"></script>
    <script src="bower_components/angular-route/angular-route.min.js"></script>
    <script src="bower_components/angular-loader/angular-loader.min.js"></script>
    <script src="bower_components/bootstrap/dist/js/bootstrap.min.js"></script>
    <script src="bower_components/d3/d3.min.js"></script>
    <!--Module -->
    <script src="scripts/modules/module.js"></script>
    <script src="scripts/modules/d3.module.js"></script>
    <!--Controllers -->
    <script src="scripts/controllers/mainHeroController.js"></script>
    <script src="scripts/controllers/workHistoryController.js"></script>
    <!--Directives-->
    <script src="scripts/directives/mainHero.directive.js"></script>
    <script src="scripts/directives/mainNavbar.directive.js"></script>
    <script src="scripts/directives/workHistory.directive.js"></script>
</head>

<!--Main Landing Page-->
<body ng-app="webApp">
    <div id="container1">
        <work-history chart-data="myData"></work-history>
    </div>
    <div id="container2">
        Container 2
    </div>
</body>
</html>

workHistory.directive.js

(function() {
    'use strict';

    angular
        .module('webApp')
        .directive('workHistory', workHistory);

    function workHistory()
    {
        var directive =
            {
                restrict: 'EA',
                controller: 'WorkHistoryController',
                //controllerAs: 'workhistory',
                scope: {data: '=chartData'},
                template: "<svg width='850' height='200'></svg>",
                link: workHistoryLink,
            };

        return directive;
    }

    function workHistoryLink(scope, element/*, attrs, ctrl, tfn*/)
    {
        var chart = d3.select(element[0]);
        chart.append("div").attr("class", "chart")
            .selectAll('div')
            .data(scope.data).enter().append("div")
            .transition().ease("elastic")
            .style("width", function(d) { return d + "%"; })
            .text(function(d) { return d + "%"; });
    }

})();

main.css

.axis path, .axis line {
  fill: none;
  stroke: black;
  shape-rendering: crispEdges;
}

.axis text {
  font-family: sans-serif;
  font-size: 10px;
}

h1 {
  font-family: sans-serif;
  font-weight: bold;
  font-size: 16px;
}

.tick {
  stroke-dasharray: 1, 2;
}

The Problem: With this code, nothing displays. I get the following error:

angular.js:13920TypeError: chart.append(...).attr(...).selectAll(...).data(...).enter is not a function

Can someone help me understand how to properly set this up? (Bonus: can someone explain how to configure the collapsible tree d3 chart inside a custom directive?)

Thanks.

2 Answers

Answers 1

There are a couple of errors in your code:

  1. Your template is not right: you create an svg, but you insert standard HTML tags inside it. That can't possibly work. In this case you should make a template with a div tag as the root node.
  2. If you want to provide an initial template like you do, you should make it the root node by setting the replace property of your directive to true.
  3. Then, if you want the divs added by d3 to be visible, give them a background or a border, with a ".style('background', 'blue')" for example.
  4. I don't see what the 'WorkHistoryController' controller can do here. In my opinion, in such a directive you should avoid providing both link and controller.
  5. Finally, if you want the whole thing to always work, you should put the d3 code inside a $watch so that the rendering is only triggered once the data is set on the scope.
  6. As a complement, now that the chart is always observing the array, it should also handle the removal of elements that are no longer needed when the array is smaller than before: ".exit().remove();"

The directive should look something like this:

angular
    .module('app', [])
    .directive('workHistory', workHistory);

function workHistory() {
    var directive =
        {
          restrict: 'EA',
          scope: {data: '=chartData'},
          replace: true,
          template: "<div style='width:100%'></div>",
          link: workHistoryLink,
        };

    return directive;
}

function workHistoryLink(scope, element) {
  scope.$watch('data', function(){
    var chart = d3.select(element[0]);
    chart.append("div").attr("class", "chart")
      .selectAll('div')
      .data(scope.data).enter().append("div")
      .style('background', 'blue')
      .transition().ease("elastic")
      .style("width", function(d) { return d + "%"; })
      .text(function(d) { return d + "%"; })
      .exit().remove();
  }, true);
}
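The role of the $watch in point 5 can be sketched framework-free: rendering must be a no-op until the bound data actually exists. This is an illustrative sketch only (makeRenderer and the render callback are invented names, not Angular or d3 API):

```javascript
// Illustrative sketch (not Angular code): defer rendering until data is an array.
// In the directive, scope.$watch plays this role -- the d3 code only runs once
// scope.data has been assigned by the controller.
function makeRenderer(render) {
  return function maybeRender(data) {
    if (!Array.isArray(data)) {
      return false; // data not ready yet; calling d3's .data() now would fail
    }
    render(data);
    return true;
  };
}
```

The same guard is why the directive's d3 calls sit inside `scope.$watch('data', ...)` rather than running once at link time.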

Here is a jsbin to make my point: http://jsbin.com/vegesigevi/edit?html,js,output

Hope this helps

Answers 2

Here is another way to use D3.js, via the nvd3 directive:

https://github.com/novus/nvd3

<nvd3 options="options" data="data"></nvd3> 

In the controller you set options and data.

Visit this site to find examples: http://nvd3.org/examples/index.html

Read More

Thursday, September 29, 2016

Any way to run multiple Google App Engine local instances for appengineFunctionalTest?

Leave a Comment

Background

From the docs, at https://github.com/GoogleCloudPlatform/gradle-appengine-plugin

I see that by putting my functionalTests in /src/functionalTests/java does the following:

  1. Starts the Local GAE instance
  2. runs tests in the functionalTests directory
  3. Stops the Local instance after the tests are complete

My Issue

For my microservices, I need to have 2 local servers for running my tests. 1 server is responsible for a lot of auth operations, and the other microservices talk to this server for some verification operations.

I've tried

appengineFunctionalTest.dependsOn ':authservice:appengineRun' 

this does start the dependent server, but then it hangs and the tests don't continue. I see that I can set daemon = true and start the server on a background thread, but I can only seem to do that in isolation.

Is there a way to have a 'dependsOn' also be able to pass parameters to the dependent task? I haven't found a way to make that happen.

Or perhaps there is another way to accomplish this.

Any help appreciated

0 Answers

Read More

Cannot install psget

Leave a Comment

I am trying to install psget on windows 10 from powershell in admin mode but I get:

PS C:\Windows\system32> (new-object Net.WebClient).DownloadString("http://psget.net/GetPsGet.ps1") | iex
Downloading PsGet from https://github.com/psget/psget/raw/master/PsGet/PsGet.psm1
Invoke-WebRequest : The given path's format is not supported.
At line:42 char:13
+             Invoke-WebRequest -Uri $Url -OutFile $SaveToLocation
+             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotImplemented: (:) [Invoke-WebRequest], NotSupportedException
    + FullyQualifiedErrorId : WebCmdletIEDomNotSupportedException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand

Import-Module : The specified module 'C:\Users\myuser\Documents\WindowsPowerShell\Modules C:\Users\myuser\Documents\WindowsPowerShell\Modules\PsGet' was not loaded because no valid module file was found in any module directory.
At line:105 char:9
+         Import-Module -Name $Destination\PsGet
+         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ResourceUnavailable: (C:\Users\myuser\Do...l\Modules\PsGet:String) [Import-Module], FileNotFoundException
    + FullyQualifiedErrorId : Modules_ModuleNotFound,Microsoft.PowerShell.Commands.ImportModuleCommand

PsGet is installed and ready to use
USAGE:
    PS> import-module PsGet
    PS> install-module PsUrl

For more details:
    get-help install-module
Or visit http://psget.net
PS C:\Windows\system32>

As suggested below, PsGet is actually already installed on Windows 10. I have then continued with the next step:

[screenshot]

and as can be seen it installs successfully (needs to be done running as administrator). After a restart of the powershell console I still don't get any color highlighting though:

[screenshot]

Any ideas?

Btw: the folder C:\Users\[my-user]\Documents\WindowsPowerShell\Modules is empty:

[screenshot]

2 Answers

Answers 1

It looks like the script at http://psget.net/GetPsGet.ps1 decides where to install by querying @($env:PSModulePath -split ';') and then limiting the search to paths under Documents\WindowsPowerShell\Modules.

It appears that on your computer, PSModulePath includes the folder C:\Users\myuser\Documents\WindowsPowerShell\Modules twice, which breaks the installation script.

You can do either one of these two options to solve it:

  1. Remove one instance of C:\Users\myuser\Documents\WindowsPowerShell\Modules from the PSModulePath variable.
  2. Install PsGet manually using the instructions in the official website.

Answers 2

Um... psget as in the PowerShellGet module, which I am almost certain comes with Windows 10. I believe your error is even telling you that, where it says "PsGet is installed and ready to use".

Read More

Wednesday, September 28, 2016

Javascript - Parsing JSON returns syntax error

Leave a Comment

I am getting an error when trying to parse JSON:

SyntaxError: Unexpected token u in JSON at position 0(…)
eFormsAtoZIndex.aspx:6558

Full Code: http://pastebin.com/LXpJN8GF

Relevant Code:

$(document).ready(function() {
    var rebuild = getParameterByName("rebuild");
    var createdStructures = $('#AtoZContentDiv').children().length;
    if ((rebuild !== undefined && rebuild !== null && rebuild.indexOf("true") === 0) || (createdStructures === 0)) {
        // clean up pre-existing data
        cleanUp();

        // create container structure
        createFormLinkContainers();

        // Call SP web services to retrieve the information and create the A to Z
        retrieveListData();
        completeInitialization();
    } else {
        try {
            aggregateAll = jQuery.parseJSON($('#hdnAggregateAll').val());
            console.log(jQuery.parseJSON($('#hdnAggregateAll').val()));
            aggregatePersonal = jQuery.parseJSON($('#hdnAggregatePersonal').val());
            aggregateBusiness = jQuery.parseJSON($('#hdnAggregateBusiness').val());
            ministryAggregate = jQuery.parseJSON($('#hdnMinistryAggregate').val());
            caAggregate = jQuery.parseJSON($('#hdnCAAggregate').val());
            sTaxAggregate = jQuery.parseJSON($('#hdnSTaxAggregate').val());
            bTaxAggregate = jQuery.parseJSON($('#hdnBTaxAggregate').val());
            leTaxAggregate = jQuery.parseJSON($('#hdnLETaxAggregate').val());
        } catch (err) {
            console.log(err);
        }

        var type = getParameterByName("filter");
    }
    $("#tab-all").click(function() {
        loadit('all');
    });

    $("#tab-business").click(function() {
        loadit('business');
    });

    $(document).on('click', '#tab-personal', function(e) {
        loadit('personal');
    });

    buildFilterMenu();
    loadit('all');
});

function createJSONStructure(title, desc, index, type, formLink, documentLink, pubType, processId, ministry, ca, stax, btax, letax) {
    if (desc !== undefined && desc !== null) {
        desc = desc.replace(/&lt;/g, "<").replace(/&gt;/g, ">");
    } else {
        desc = "";
    }
    var typeArr = [];
    type = type.replace(/&amp;/, "&");

    var tempType = type.split("&");

    for (i = 0; i < tempType.length; i++) {
        typeArr.push(tempType[i].trim());
    }

    if (formLink === undefined || formLink === null || formLink.length === 0) {
        formLink = "";
    }

    if (documentLink === undefined || documentLink === null || documentLink.length === 0) {
        documentLink = "";
    }

    // subject, business and life event taxonomies must cater for multiple entries
    var staxStructure = buildTaxonomyJSONStructure(stax, "stax");
    var btaxStructure = buildTaxonomyJSONStructure(btax, "btax");
    var letaxStructure = buildTaxonomyJSONStructure(letax, "letax");

    var json = {
        'name': title,
        'desc': desc,
        'type': typeArr,
        'pubType': pubType,
        'pdflink': documentLink.split(",")[0].replace(/\'/g, "&#39;"),
        'formlink': formLink.split(",")[0].replace(/\'/g, "&#39;"),
        'processid': processId,
        'index': index,
        'ministry': ministry.replace(/\,/g, " "),
        'ca': ca.replace(/\,/g, " "),
        'stax': staxStructure,
        'btax': btaxStructure,
        'letax': letaxStructure
    };
    return json;
}

function completeInitialization() {
    if (checkDataLoaded()) {
        // add the Navigation to the containers once all the data is inserted
        addNavigationToContainers();

        var type = getParameterByName("filter");
        if (type == null || type.length == 0) {
            type = "all";
        }

        loadit(type);

        buildFilterMenu();

        filter(type);

        $('#hdnAggregateAll').val(stringify(aggregateAll));
        console.log(aggregateAll);
        $('#hdnAggregatePersonal').val(stringify(aggregatePersonal));
        $('#hdnAggregateBusiness').val(stringify(aggregateBusiness));
        $('#hdnMinistryAggregate').val(stringify(ministryAggregate));
        $('#hdnCAAggregate').val(stringify(caAggregate));
        $('#hdnSTaxAggregate').val(stringify(sTaxAggregate));
        $('#hdnBTaxAggregate').val(stringify(bTaxAggregate));
        $('#hdnLETaxAggregate').val(stringify(leTaxAggregate));
    } else {
        retryCount += 1;

        // Check that the maximum retries have not been exceeded
        if (retryCount <= maxRetries) {
            setTimeout("completeInitialization();", 1000 * retryCount);
        }
    }
}

Can anyone point out what is wrong with the JSON structure or JS or how I can debug items within it?
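One way to pin down the "Unexpected token u" message: JSON.parse coerces its argument to a string, so parsing undefined means parsing the literal text "undefined", which fails on the leading u. This typically means a hidden field's .val() returned nothing. A minimal, framework-free sketch (the helper name tryParse is illustrative):

```javascript
// "Unexpected token u" almost always means JSON.parse received undefined:
// the value is coerced to the string "undefined", and 'u' is its first char.
function tryParse(text) {
  try {
    return { ok: true, value: JSON.parse(text) };
  } catch (e) {
    return { ok: false, error: e.name };
  }
}
```

Checking each hidden field's value before parsing (or wrapping the parse as above) narrows down which field is empty.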

EDIT (As per Jaromanda X's and CH Buckingham reply):

$('#hdnAggregateAll').val(JSON.stringify(aggregateAll));
console.log(aggregateAll);
$('#hdnAggregatePersonal').val(JSON.stringify(aggregatePersonal));
$('#hdnAggregateBusiness').val(JSON.stringify(aggregateBusiness));
$('#hdnMinistryAggregate').val(JSON.stringify(ministryAggregate));
$('#hdnCAAggregate').val(JSON.stringify(caAggregate));
$('#hdnSTaxAggregate').val(JSON.stringify(sTaxAggregate));
$('#hdnBTaxAggregate').val(JSON.stringify(bTaxAggregate));
$('#hdnLETaxAggregate').val(JSON.stringify(leTaxAggregate));

ERROR:

10:42:24.274 TypeError: item is undefined
createFormLinks/<()        eformsAtoZIndex.aspx:5644
.each()                    jquery-1.11.1.min.js:2
createFormLinks()          eformsAtoZIndex.aspx:5638
processResult()            eformsAtoZIndex.aspx:5507
m.Callbacks/j()            jquery-1.11.1.min.js:2
m.Callbacks/k.fireWith()   jquery-1.11.1.min.js:2
x()                        jquery-1.11.1.min.js:4
.send/b()                  jquery-1.11.1.min.js:4
1                          eformsAtoZIndex.aspx:5644:1

On line:

if (item.processid !== "0") 

In Block:

function createFormLinks(formItems, index)
{
    // create all links on the page and add them to the AtoZContent div for now
    var parentContainer = $("#AtoZContentDiv");

    if (parentContainer === null)
    {
        // if it doesn't exist, we exit because I can't reliably add a new control to the body and get the display
        // location correct
        return;
    }

    // sort form link array first
    formItems = sortResults(formItems, 'name', true);

    var count = 0;

    $.each(formItems, function(i, item)
    {
        var link;
        count = count + 1;

        //add links to parent container
        if (item.processid !== "0")
        {
            link = item.formlink;
        }
        else if (item.pdflink !== "")
        {
            link = item.pdflink;
        }

        var container = $("#AtoZContent-" + index);
        var itemType = "all";

        if (item.type !== null && item.type !== undefined && item.type.length === 1) itemType = item.type[0];

        var str = "<div id='divFormLink-" + index + "-" + count + "' type='" + itemType + "' ";

        if (item.name !== undefined && item.name !== null)
        {
            str = str + " ministry='" + stripPunctuation(item.ministry) + "' ";
            str = str + " ca='" + stripPunctuation(item.ca) + "' ";

            // now, we need to handle these differently since they can have multiple values
            str = str + " stax='";
            for (i = 0; i < item.stax.length; i++)
            {
                str = str + stripPunctuation(item.stax[i]);
            }
            str = str + "' ";

            str = str + " btax='";
            for (i = 0; i < item.btax.length; i++)
            {
                str = str + stripPunctuation(item.btax[i]);
            }
            str = str + "' ";

            str = str + " letax='";
            for (i = 0; i < item.letax.length; i++)
            {
                str = str + stripPunctuation(item.letax[i]);
            }
            str = str + "' ";
        }

        str = str + " index='" + index + "' style='word-wrap: break-word;'></div>";
        container.append(str);

        var innerDiv = $("#divFormLink-" + index + "-" + count);
        appendIcon(innerDiv, item.pubType);
        innerDiv.append("<a id='formLink-" + index + "-" + count + "' href='" + link + "'>" + item.name + "</a>");
        innerDiv.append("<div id='formDesc-" + index + "-" + count + "'>" + item.desc + "</div><br />");
    });
}

1 Answer

Answers 1

On line 155 you push json even when it is undefined:

if (pubType == "eForm" || pubType == "PDF") {
    var json = createJSONStructure(title, desc, index, type.toLowerCase(), formLink, documentLink, pubType, processId, ministry, ca, stax, btax, letax);
}
formItems.push(json);

And after that you are trying to read item.processid of undefined. You can define variables in an if-block, but in that case you should add some validation:

$.each(formItems, function(i, item)
{
    var link;
    count = count + 1;

    if (item == null) {
        return;
    }

    //add links to parent container
    if (item.processid !== "0")
    ...
});
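The same null-guard can also be applied up front by compacting the array before iterating, so item.processid is never read from undefined. A minimal sketch (compactItems is an illustrative name, not from the original code):

```javascript
// Drop null/undefined entries before iterating. formItems can contain
// undefined when the push above happens outside the if-block that defines json.
function compactItems(formItems) {
  return (formItems || []).filter(function (item) {
    return item !== null && item !== undefined;
  });
}
```

With this, the $.each loop can run over compactItems(formItems) and keep its body unchanged.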
Read More

Show multilayer function relationship by cscope in vim

Leave a Comment

I know source insight can show multilayer function relationship in one window.

For example, we have four functions as below

void example_A() {
    example_B();
}

void example_B() {
    example_C();
}

void example_C() {
    example_D();
}

int example_D() {
    return 5;
}

When I click example_D() in source insight, source insight show example_C() is calling the function.

Moreover, when I click example_C(), I see example_B() is calling the function.

The relationship is like this:

Example_D()
   |
   -->Example_C()
         |
         -->Example_B()
               |
               -->Example_A()

Could I see the relationship in one window by using cscope in vim?

Thank you.

1 Answer

Answers 1

The CCTree plugin for Vim does this kind of visualization using cscope:

https://sites.google.com/site/vimcctree/

http://www.vim.org/scripts/script.php?script_id=2368

https://github.com/hari-rangarajan/CCTree

Read More

Logstash worker dies with no reason

Leave a Comment

Using logstash 2.3.4-1 on CentOS 7 with the kafka input plugin, I sometimes get:

{:timestamp=>"2016-09-07T13:41:46.437000+0000", :message=>#0, :events_consumed=>822, :worker_count=>1, :inflight_count=>0, :worker_states=>[{:status=>"dead", :alive=>false, :index=>0, :inflight_count=>0}], :output_info=>[{:type=>"http", :config=>{"http_method"=>"post", "url"=>"${APP_URL}/", "headers"=>["AUTHORIZATION", "Basic ${CREDS}"], "ALLOW_ENV"=>true}, :is_multi_worker=>false, :events_received=>0, :workers=>"", headers=>{..}, codec=>"UTF-8">, workers=>1, request_timeout=>60, socket_timeout=>10, connect_timeout=>10, follow_redirects=>true, pool_max=>50, pool_max_per_route=>25, keepalive=>true, automatic_retries=>1, retry_non_idempotent=>false, validate_after_inactivity=>200, ssl_certificate_validation=>true, keystore_type=>"JKS", truststore_type=>"JKS", cookies=>true, verify_ssl=>true, format=>"json">]>, :busy_workers=>1}, {:type=>"stdout", :config=>{"ALLOW_ENV"=>true}, :is_multi_worker=>false, :events_received=>0, :workers=>"\n">, workers=>1>]>, :busy_workers=>0}], :thread_info=>[], :stalling_threads_info=>[]}>, :level=>:warn}

this is the config

input {
  kafka {
    bootstrap_servers => "${KAFKA_ADDRESS}"
    topics => ["${LOGSTASH_KAFKA_TOPIC}"]
  }
}

filter {
  ruby {
    code =>
      "require 'json'
       require 'base64'

       def good_event?(event_metadata)
         event_metadata['key1']['key2'].start_with?('good')
       rescue
         true
       end

       def has_url?(event_data)
         event_data['line'] && event_data['line'].any? { |i| i['url'] && !i['url'].blank? }
       rescue
         false
       end

       event_payload = JSON.parse(event.to_hash['message'])['payload']

       event.cancel unless good_event?(event_payload['event_metadata'])
       event.cancel unless has_url?(event_payload['event_data'])
      "
  }
}

output {
  http {
      http_method => 'post'
      url => '${APP_URL}/'
      headers => ["AUTHORIZATION", "Basic ${CREDS}"]
  }

  stdout { }
}

Which is odd, since it is written to logstash.log and not logstash.err

What does this error mean and how can I avoid it? (only restarting logstash solves it, until the next time it happens)

1 Answer

Answers 1

According to this github issue your ruby code could be causing the issue. Basically any ruby exception will cause the filter worker to die. Without seeing your ruby code, it's impossible to debug further, but you could try wrapping your ruby code in an exception handler and logging the exception somewhere (at least until logstash is updated to log it).
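The suggested exception handler can be sketched directly in the filter block. This is only a sketch; the '_rubyexception' tag name is my choice, and the hash-style event['tags'] access is the form used by logstash 2.x events:

```
filter {
  ruby {
    code =>
      "begin
         # ... existing JSON parsing and event.cancel logic goes here ...
       rescue => e
         # Swallow the exception instead of letting it kill the filter worker,
         # and mark the event so failures can be found later.
         (event['tags'] ||= []) << '_rubyexception'
       end
      "
  }
}
```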

Read More

Activity log of database MS SQL Server

Leave a Comment

I have a database with more than a hundred tables. I am continuously adding columns to existing tables (when required) and have also added a few new tables.

Now I want to check what changes I have made in the last 3 months. Is there any activity log in MS SQL Server 2012 for that specific database to track changes?

6 Answers

Answers 1

Right now your options are limited; going forward you can try the options below, and also check whether any of them help you now:

1. If you have enabled Audit, you can track the changes.

2. The default trace also captures tables being created, but it uses a roll-over file mechanism that overwrites the oldest files when space runs out, so you may be out of luck (since you are asking for a three-month range), but give it a try.

3. Finally, one last option is to query the transaction log:

select * from fn_dblog(null, null)
where [transaction name] = 'CREATE TABLE'

The transaction-log option works only if you have T-log backups covering the full three months, and you would also need to restore them.

Answers 2

To check all activity in a past period, you can work with MSSQL Audit. It's the best way to track any changes at any time. Please check https://msdn.microsoft.com/en-us/library/cc280386.aspx

Answers 3

Perhaps this can get you partway. sys.objects has create and modify dates but unfortunately sys.columns does not. However the latest columns added will have higher column_ids. I don't know that you would be able to pick out deleted columns that easily. Note that changes other than column changes can be reflected by the modify date.

select  s.name [schema], o.name [table], o.modify_date [table_modify_date], c.column_id, c.name
from    sys.schemas s
join    sys.objects o on o.schema_id = s.schema_id
left    join sys.columns c on c.object_id = o.object_id
where   o.type = 'U'    --user tables only
and     o.modify_date >= dateadd(M, -3, getdate())
order   by s.name, o.name, column_id;

To make this audit easier in the future you can create a DDL trigger that will log all schema changes to a table or in source control if you use something like a SSDT data project to manage your changes.

Answers 4

You could use a DDL Trigger:

CREATE TRIGGER ColumnChanges
ON DATABASE
FOR ALTER_TABLE
AS
  DECLARE @data XML
  SET @data = EVENTDATA()
  INSERT alter_table_log
      (PostTime, DB_User, Event, TSQL)
      VALUES
      (GETDATE(),
       CONVERT(nvarchar(100), CURRENT_USER),
       @data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
       @data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'));
GO

Answers 5

Take snapshots of the metadata definitions via the "Generate Scripts..." Tasks option from the SQL Server Management Studio.

[screenshot]

Store the generated script files in a folder whose name references the current date. Once this has been done more than once, WinDiff can be used to highlight the database changes made between any two snapshots. Choose the "Generate Scripts" options carefully and consistently so that time based comparisons are more beneficial.

Answers 6

You could run a report from the right click menu on the DB:

[screenshot]

There are several reports that might interest you in this drop down. Or you could possibly create a custom report with just the information that you need.

My Schema report only goes back to 9/3/2016, but I have 1000+ tables with 60+ columns with many updates daily. Yours might go back further.

Read More

Tuesday, September 27, 2016

What is untrackOutstandingTimeouts setting for in Protractor?

Leave a Comment

In the Protractor reference configuration, there is the untrackOutstandingTimeouts setting mentioned:

// Protractor will track outstanding $timeouts by default, and report them in
// the error message if Protractor fails to synchronize with Angular in time.
// In order to do this Protractor needs to decorate $timeout.
// CAUTION: If your app decorates $timeout, you must turn on this flag. This
// is false by default.
untrackOutstandingTimeouts: false,

I've never seen anyone changing the setting. What is the practical usage of the setting? When should I set it to true?

3 Answers

Answers 1

The outstanding timeouts are tracked so that the Protractor errors can report them. You won't get timeout information in your errors if you turn this off.

You might need to turn it off, however, if you decorate your $timeout object (for whatever reason), since Protractor also decorates the same object and you won't see your changes to it when you need them.

This was added here, by user request.

Answers 2

untrackOutstandingTimeouts: true applies to $timeout, and maybe also to $interval (I am not sure about that).

Simulate the passage of time in protractor?

Answers 3

Here's the official FAQ for the question. It's in the same line as @Vlad's answer.

Read More

Mocking method calls using power mockito - org.powermock.api.mockito.ClassNotPreparedException

Leave a Comment

I have an image loader class and I need to test some static methods in it. Since Mockito does not support static methods, I switched to PowerMockito. But the static method I am testing contains this call:

 Base64.encodeToString(byteArray, Base64.DEFAULT); 

To mock this I am using the mockStatic method as below, together with the @PrepareForTest annotation.

 PowerMockito.mockStatic(Base64.class); 

But Android Studio is still returning an error, as below.

org.powermock.api.mockito.ClassNotPreparedException: The class android.util.Base64 not prepared for test. To prepare this class, add class to the '@PrepareForTest' annotation.

Below is my complete code.

Code to be tested:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.util.Base64;
import android.widget.ImageView;

public static String convertBitmapToBase64(Bitmap imageBitmap, boolean withCompression) {
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    imageBitmap.compress(Bitmap.CompressFormat.PNG, 120, byteArrayOutputStream);
    byte[] byteArray = byteArrayOutputStream.toByteArray();
    return Base64.encodeToString(byteArray, Base64.DEFAULT);
}

Test class code

import android.graphics.Bitmap;
import android.util.Base64;
import org.junit.Before;
import org.junit.runner.RunWith;
import org.mockito.MockitoAnnotations;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;
import org.testng.annotations.Test;

@RunWith(PowerMockRunner.class)
@PrepareForTest({Base64.class})
public class ImageLoaderTest
{
    @Test
    public void testConvertBitmap() {
        byte[] array = new byte[20];
        PowerMockito.mockStatic(Base64.class);
        PowerMockito.when(Base64.encodeToString(array, Base64.DEFAULT)).thenReturn("asdfghjkl");
        Bitmap mockedBitmap = PowerMockito.mock(Bitmap.class);
        String output = ImageLoaderUtils.convertBitmapToBase64(mockedBitmap);
        assert (!output.isEmpty());
    }

}

Gradle dependencies

testCompile 'junit:junit:4.12'
testCompile 'org.powermock:powermock:1.6.5'
testCompile 'org.powermock:powermock-module-junit4:1.6.5'
testCompile 'org.powermock:powermock-api-mockito:1.6.5'

2 Answers

Answers 1

Short answer: with plain Mockito you can't. Here is the relevant part of the FAQ:

What are the limitations of Mockito

  • Cannot mock final classes
  • Cannot mock static methods
  • Cannot mock final methods - their real behavior is executed without any exception. Mockito cannot warn you about mocking final methods so be vigilant.

Further information about this limitation:

Can I mock static methods?

No. Mockito prefers object orientation and dependency injection over static, procedural code that is hard to understand & change. If you deal with scary legacy code you can use JMockit or Powermock to mock static methods.

If you want to use PowerMock try like this:

@RunWith(PowerMockRunner.class)
@PrepareForTest({ Base64.class })
public class YourTestCase {
    @Test
    public void testStatic() {
        mockStatic(Base64.class);
        when(Base64.encodeToString(argument)).thenReturn("expected result");
    }
}

More in the PowerMock test samples.

Answers 2

Really trivially, I think you're calling @PrepareForTest incorrectly. It should be @PrepareForTest(Base64.class), not @PrepareForTest({Base64.class}) (note the curly braces in your code). Given the error you're getting, I think that's relevant.

Read More

How to allow users to define arbitrary number of fields in Rails

Leave a Comment

I'm building a CMS where administrators must be able to define an arbitrary number of fields of different types (text, checkboxes, etc). Users can then fill in those fields to generate posts.

How can this be done in Rails? I guess the persistence of these "virtual" attributes would be handled by a serialized attribute in the database. But I am not sure how to structure the views and controllers, or where the fields should be defined.

2 Answers

Answers 1

What you describe is called the Entity-Attribute-Value (EAV) model. MattW. provided two possible solutions, but you can also use one of these gems that implement this data pattern instead of handling it yourself:

I haven't used any of these gems before, so I can't suggest which one is best.

RefineryCMS has this extension for the feature you need. You might want to take a look for ideas.

There is also a similar older question here on Stackoverflow.

Answers 2

As soon as it's user defined, it is never a column (field), it's always a row (entity) in the database – users don't get to define your data structure.

Here are two ideas. (1) is the more idiomatic "Rails Way". (2) is somewhat less complicated, but may tie you to your specific DBMS.

(1) Using four models: Posts (n:1) PostTypes (1:n) PostFields (1:n) PostValues (n:1 Posts):

create_table :post_types do |t|
  t.string :name
  t.text :description
  ....
end

create_table :post_fields do |t|
  t.references :post_type
  t.string :name
  t.integer :type
  t.boolean :required
  ....
end

create_table :posts do |t|
  t.references :post_type
  (... common fields for all posts, like user or timestamp)
end

create_table :post_values do |t|
  t.references :post
  t.references :post_field
  t.string :value
  ....
end

The only problem is that you're limited to a single type for values in the database. You could do a polymorphic association and create different models for :boolean_post_values, :float_post_values, etc.

(2) Another solution may be to use a JSON-based data structure, i.e. the same as above, but instead of PostValues, just save the data in one field of Posts:

create_table :posts do |t|
  t.references :post_type
  t.json :data
  ...
end
# ... no table :post_values

This is probably easier, but json is a Postgres-specific column type (although you could use a string column and do the de/encoding yourself).

Read More

Reading SQL Varbinary Blob from Database

Leave a Comment

I am working on saving files as SQL blobs in a varbinary(max) column, and have got the save side of things working now (I believe).

What I can't figure out is how to read the data back out. Given that I'm retrieving my DB values using a stored procedure, I should be able to access the column data like ds.Tables[0].Rows[0]["blobData"]; so is it necessary to have a SqlCommand etc. like I've seen in examples such as the one below?

private void OpenFile(string selectedValue)
{
    String connStr = "...connStr";
    fileName = ddlFiles.GetItemText(ddlFiles.SelectedItem);

    using (SqlConnection conn = new SqlConnection(connStr))
    {
        conn.Open();
        using (SqlCommand cmd = conn.CreateCommand())
        {
            cmd.CommandText = "SELECT BLOBData FROM BLOBTest WHERE testid = " + selectedValue;

            using (SqlDataReader dr = cmd.ExecuteReader())
            {
                while (dr.Read())
                {
                    int size = 1024 * 1024;
                    byte[] buffer = new byte[size];
                    int readBytes = 0;
                    int index = 0;

                    using (FileStream fs = new FileStream(fileName, FileMode.Create, FileAccess.Write, FileShare.None))
                    {
                        while ((readBytes = (int)dr.GetBytes(0, index, buffer, 0, size)) > 0)
                        {
                            fs.Write(buffer, 0, readBytes);
                            index += readBytes;
                        }
                    }
                }
            }
        }
    }
}

Is there a simpler way to do this when I can access the column that I need without the sqlcommand?

Hope I was clear enough in my question, if not then ask and I will elaborate!

UPDATE:

The situation is now this: I have the value of the blobData column returned by my stored procedure, and can pass this into a memory stream and call LoadDocument(memStream); however, this results in gibberish text instead of my actual file displaying.

My question now is: is there a way to get the full path, including file extension, of a file stored in a SQL blob? I am currently looking into using a FileTable for this in the hope that I will be able to get the full path.

UPDATE 2:

I tried creating a temp file and reading it, to no avail (still gibberish):

string fileName = System.IO.Path.GetTempFileName().ToString().Replace(".tmp", fileExt);

using (MemoryStream myMemoryStream = new MemoryStream(blobData, 0, (int)blobData.Length, false, true))
{
    using (FileStream myFileStream1 = File.Create(fileName))
    {
        myMemoryStream.WriteTo(myFileStream1);

        myMemoryStream.Flush();
        myMemoryStream.Close();

        myFileStream1.Flush();
        myFileStream1.Close();

        FileInfo fi = new FileInfo(fileName);

        Process prc = new Process();
        prc.StartInfo.FileName = fi.FullName;
        prc.Start();
    }
}

Cheers, H

3 Answers

Answers 1

You are making it more difficult than it needs to be. This example uses MySQL just because it is handy - the providers all work pretty much the same. Some things will need to be tweaked to handle very large data items (more of a server thing than a DB provider thing).

Saving image

string sql = "INSERT INTO BlobDemo (filename, fileType, fileData) VALUES (@name, @type, @data)";
byte[] imgBytes;

using (MySqlConnection dbCon = new MySqlConnection(MySQLConnStr))
using (MySqlCommand cmd = new MySqlCommand(sql, dbCon))
{
    string ext = Path.GetExtension(filename);

    dbCon.Open();
    cmd.Parameters.Add("@name", MySqlDbType.String).Value = "ziggy";
    cmd.Parameters.Add("@data", MySqlDbType.Blob).Value = File.ReadAllBytes(filename);
    cmd.Parameters.Add("@type", MySqlDbType.String).Value = ext;
    int rows = cmd.ExecuteNonQuery();
}

The file data is fed directly to the DB Provider

is there a way to get the full path including file extension of a file stored in an SQL Blob?

No. Your code and the code above is saving the bytes which make up an image or any file.

Read Img Data back

This will read the data back, save it to file and start the associated app:

string SQL = "SELECT itemName, itemData, itemtype FROM BlobDemo WHERE Id = @id";

string ext = "";
string tempFile = Path.Combine(@"C:\Temp\Blobs\",
    Path.GetFileNameWithoutExtension(Path.GetTempFileName()));

using (MySqlConnection dbCon = new MySqlConnection(MySQLConnStr))
using (MySqlCommand cmd = new MySqlCommand(SQL, dbCon))
{
    cmd.Parameters.Add("@id", MySqlDbType.Int32).Value = 14;
    dbCon.Open();

    using (MySqlDataReader rdr = cmd.ExecuteReader())
    {
        if (rdr.Read())
        {
            ext = rdr.GetString(2);
            File.WriteAllBytes(tempFile + ext, (byte[])rdr["itemData"]);
        }
    }

    // OS run test
    Process prc = new Process();
    prc.StartInfo.FileName = tempFile + ext;
    prc.Start();
}
  • The number of bytes read back matched
  • The associated app launched just fine with the image
  • The image showed in the picturebox

In both cases, File.ReadAllBytes() and File.WriteAllBytes() will do most of the work for you, no matter the file type.

There is no need to scoop out the data 1k at a time. If the blob was something like an image you wished to use in the app:

using (MySqlDataReader rdr = cmd.ExecuteReader()) {     if (rdr.Read())     {         ext = rdr.GetString(2);         using (MemoryStream ms = new MemoryStream((byte[])rdr["imgData"]))         {             picBox.Image = Image.FromStream(ms);         }     } } 

The blob bytes can be fed to the memstream, and even a temp Image need not be created unless you don't need to show it.

In all, Ceiling Cat made it back just fine (the image was 1.4 MB, zoomed; another test with a 15.4 MB image also worked - both are larger than I would care to store in a DB):


Depending on how this is used, consider archiving the images to somewhere on the file system and just saving the filename - perhaps with the Id added to assure the names are unique and help visually link them to the record. Not only will large blobs of data bloat the DB, but there is obviously some overhead involved in converting to and from bytes which can be avoided.


If you want/need to delete these at some point after the associated app is done with them (not really part of the question), then use a temp file in a specific directory so you can delete everything in it (conditionally1) when the app ends, or at start up:

private string baseAppPath = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
    "Company Name", "Product Name", "Temp Files");

Append a Temp Filename and the actual extension for individual files. Alternatively, you could maintain a List<string> trashCan to store the name of each file you create to be deleted later.

1 Whenever you do delete them, do allow that files could still be open in the app associated with the extension.

Answers 2

You will need to use a SqlCommand to retrieve data stored in a varbinary(MAX) column, unless you use a FileTable, which allows access to the contents via a UNC path, similarly to regular files stored on a file system but managed by SQL Server.

If the blob may be large, the "chunk" technique you are currently using will reduce memory requirements, but at the expense of more verbose code. For reasonably sized blobs, you can read the entire column contents at once, without chunking. Whether that is feasible depends on both the size of the blob and the client's available memory.

var buffer = (byte[])cmd.ExecuteScalar();
fs.Write(buffer, 0, buffer.Length);
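The chunked alternative mentioned above is language-agnostic; here is a minimal sketch (in Java rather than this answer's C#), with in-memory streams standing in for the data reader source and the target file:

```java
import java.io.*;

public class BlobCopy {
    // Copy a large blob stream to any output a buffer at a time, so the
    // whole blob never has to fit in memory at once; returns bytes copied.
    static long copyInChunks(InputStream in, OutputStream out, int bufferSize) throws IOException {
        byte[] buffer = new byte[bufferSize];
        long total = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read); // write only the bytes actually read
            total += read;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // In-memory streams stand in for the DB reader and the output file.
        byte[] blob = new byte[3_000_000];
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copyInChunks(new ByteArrayInputStream(blob), sink, 1024 * 1024);
        System.out.println(copied); // prints 3000000
    }
}
```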

Answers 3

With the .NET SQL Server provider, you can use a little-known but cool class called SqlBytes. It's been designed specifically to map varbinary fields, but there are not many examples on how to use it.

Here is how you can save to the database with it (you can use a stored procedure or direct SQL, as I demonstrate here; we just presume MyBlobColumn is a varbinary column).

string inputPath = "YourInputFile";
using (var conn = new SqlConnection(YourConnectionString))
{
    conn.Open();
    using (var cmd = conn.CreateCommand())
    {
        // note we define a '@blob' parameter in the SQL text
        cmd.CommandText = "INSERT INTO MyTable (Id, MyBlobColumn) VALUES (1, @blob)";
        using (var inputStream = File.OpenRead(inputPath))
        {
            // open the file and map it to an SqlBytes instance
            // that we use as the parameter value.
            var bytes = new SqlBytes(inputStream);
            cmd.Parameters.AddWithValue("blob", bytes);

            // under the covers, the command will suck the inputStream out through the SqlBytes parameter
            cmd.ExecuteNonQuery();
        }
    }
}

To read the file out into a stream from the database, here is how you can do it.

string outputPath = "YourOutputFile";
using (var conn = new SqlConnection(YourConnectionString))
{
    conn.Open();
    using (var cmd = conn.CreateCommand())
    {
        // this is a regular direct SQL command, but you can use a stored procedure as well
        cmd.CommandText = "SELECT MyBlobColumn FROM MyTable WHERE Id = 1";

        // note the usage of SequentialAccess to lower memory consumption (read the docs for more)
        using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
        {
            if (reader.Read())
            {
                // again, we map the result to an SqlBytes instance
                var bytes = reader.GetSqlBytes(0); // column ordinal, here 1st column -> 0

                // I use a file stream, but that could be any stream (asp.net, memory, etc.)
                using (var file = File.OpenWrite(outputPath))
                {
                    bytes.Stream.CopyTo(file);
                }
            }
        }
    }
}

With these techniques, we never allocate any byte[] or MemoryStream instances; we just use the SQL and file streams directly, in and out.

Read More

Monday, September 26, 2016

Why are .domain, tickFormat and tickValues not recognised inside dimensions variable? (d3, parallel coordinates)

Leave a Comment

I am creating a parallel coordinates plot using d3.js, but am struggling to format axis labeling as I would like.

For instance, one of my axes, 'Buffer Concentration', is plotted on a log scale, which I've specified through the dimensions variable, like so.

var dimensions = [
  ...
  {
    key: "b.Conc",
    description: "Buffer Concentration",
    type: types["Number"],
    scale: d3.scale.log().domain([.1, 100]).range([innerHeight, 0]),
    tickValues: [.1, .2, .4, .6, .8, 1, 2, 4, 6, 8, 10, 20, 40, 60],
    tickFormat: d3.format(4, d3.format(",d"))
  },
  ...
];

However, as you can see from the resulting plot below, my attempts to specify which tick labels are shown (through tickValues) and that they be shown as ordinary numbers rather than powers of 10 (through tickFormat) are not working. Additionally, the axis does not span the domain I specified in scale; it should be [0.1, 100], not [0.1, 60].

Why is this?

Section of Parallel Coordinates Plot

Code

The data.csv, index.html and style.css files for my plot can be found here. When opened locally, it only works in Firefox.

Thanks in advance for any help, and apologies if I'm missing something basic - I'm new to d3.

1 Answer

Answers 1

It seems that you forgot to apply the custom tick values and tick format to the generated scales in this line: https://gist.github.com/LThorburn/5f2ce7d9328496b5f4c123affee8672f#file-index-html-L189

I'm not sure, but something like this should help:

if (d.tickValues) {
  renderAxis.tickValues(d.tickValues);
}

if (d.tickFormat) {
  renderAxis.tickFormat(d.tickFormat);
}
Read More

Camera and video control with HTML5 form android webview

Leave a Comment

I am using this guide: Camera and video control with HTML5.

This example works fine in Google Chrome, but I cannot make it work in an Android WebView. I also use the permission android.permission.CAMERA.

1 Answer

Answers 1

Did you remember to add this to your onCreate?

WebSettings webSettings = myWebView.getSettings();
webSettings.setJavaScriptEnabled(true);
webSettings.setAllowFileAccessFromFileURLs(true);
webSettings.setAllowUniversalAccessFromFileURLs(true);

You are probably missing the last two lines in your code.

Here is a working example

Read More

VBA trigger enter or update lookup values

Leave a Comment

I have the following code:

Sub PrintToCSV()
    Dim i As Long, e As Long
    i = Worksheets("STATEMENT (2)").Range("$G$6").Value
    e = Worksheets("STATEMENT (2)").Range("$G$7").Value

    Do While i <= e
        Range("K6") = i
        Application.Wait (Now + #12:00:01 AM#)

        If Range("$X$10").Value > 0 Then
            Cells(1, 1).Value = i
        End If

        i = i + 1
    Loop
End Sub

It loops and changes the value of Range("K6") as expected. However, changing Range("K6") manually updates other cells' values (via VLOOKUP), while this code does not. How can I ensure that the values of other cells dependent on Range("K6") change with this code?

3 Answers

Answers 1

Just FYI - do not declare like this:

Dim i,e as long 

because with this declaration only "e" is declared as Long, while "i" is a Variant. This may cause problems somewhere later.

The correct way is:

Dim i As Long
Dim e As Long

Answers 2

The problem lies in a type mismatch. The Range("K6") value is a long integer, while the lookup table stores account numbers as text. Converting the text to a number solved the problem.
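The same failure mode can be reproduced in a few lines (sketched here in Java, since the mechanics are language-agnostic): a numeric key silently misses a text-keyed table until it is converted to the matching type.

```java
import java.util.*;

public class LookupMismatch {
    public static void main(String[] args) {
        // The lookup table stores account numbers as text...
        Map<String, String> accounts = new HashMap<>();
        accounts.put("1001", "Alice");

        long key = 1001; // ...but the probe value is numeric

        // A numeric key silently misses, just like the VLOOKUP did:
        System.out.println(accounts.get(Long.valueOf(key)));   // prints null
        // Converting to the stored type fixes the lookup:
        System.out.println(accounts.get(String.valueOf(key))); // prints Alice
    }
}
```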

Answers 3

You have a bug in your code here because the type of i was undefined. This would be flagged with Option Explicit - if we were in pure VB.NET.

This is a common declaration issue where we assume will read

Dim i,e as long 

as

Dim i As Long
Dim e As Long
...

Unfortunately, it doesn't. It is weird, because it differs from the way it works in VB.NET:

Declaring Multiple Variables

You can declare several variables in one declaration statement, specifying the variable name for each one, and following each array name with parentheses. Multiple variables are separated by commas.

Dim lastTime, nextTime, allTimes() As Date 

In VBA, to be sure of the type, we can check the type of a variable with TypeName:

Sub getTypes()
    Dim i, e As Long
    MsgBox "i: " & TypeName(i)
    MsgBox "e: " & TypeName(e)
End Sub

gives:

i: Empty
e: Long
Read More

Referrer and origin preflight request headers in Safari are not changing when user navigates

Leave a Comment

I have two web pages, hosted on a.example.com and b.example.com. Each web page includes a script via a <script> tag, hosted on another domain and served with correct CORS headers.

At a certain point, the user navigates from a.example.com to b.example.com.

Safari shows strange behavior here: the referrer and origin headers in the preflight request are filled with a.example.com, making the server send the wrong value in Access-Control-Allow-Origin (and so the script can't be executed).

Is there a way to force Safari to send the correct Origin header in this kind of scenario?

1 Answer

Answers 1

Does the cache policy for the script include Vary: Origin?

And is there actually a second request after navigating to b.example.com?

If not, there is a chance that Safari is actually serving the script from cache - despite the Access-Control-Allow-Origin policy forbidding it to access the resource. That is conforming behavior if the cache policy isn't configured correctly.
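The fix this answer hints at can be illustrated with a hypothetical server-side helper (CorsHeaders and all names here are made up for illustration): echo the request's Origin back only when it is allowed, and always send Vary: Origin so a response cached for a.example.com is never reused for b.example.com.

```java
import java.util.*;

public class CorsHeaders {
    // Build the CORS response headers for a given request Origin.
    static Map<String, String> build(String requestOrigin, Set<String> allowedOrigins) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("Vary", "Origin"); // the cache key must include the Origin header
        if (allowedOrigins.contains(requestOrigin)) {
            // echo the origin back rather than hard-coding one value
            headers.put("Access-Control-Allow-Origin", requestOrigin);
        }
        return headers;
    }

    public static void main(String[] args) {
        Set<String> allowed = new HashSet<>(Arrays.asList(
            "https://a.example.com", "https://b.example.com"));
        System.out.println(build("https://b.example.com", allowed));
        // prints {Vary=Origin, Access-Control-Allow-Origin=https://b.example.com}
    }
}
```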

Read More

Sunday, September 25, 2016

Authenticated Route not working for Rspec test

Leave a Comment

I'm following this post about setting up authentication in the routes of my Rails 4 application.

Here is my routes.rb file:

Rails.application.routes.draw do

  devise_for :employees, :controllers => { registrations: 'employees/registrations' }
  devise_for :clients

  authenticate :employee do
    resources :quotation_requests, only: [:show, :edit, :index, :update, :destroy]
  end

  resources :quotation_requests, only: [:new, :create]

  get '/dashboard' => 'dashboard#show', as: 'show_dashboard'

  root to: 'home#index'
end

Here is my quotation_requests_controller_spec.rb file:

require 'rails_helper'

RSpec.describe QuotationRequestsController, type: :controller do

  describe "GET index" do
    it "renders :index template" do
      get :index
      expect(response).to render_template(:index)
    end

    it "assigns quotation requests to template" do
      quotation_requests = FactoryGirl.create_list(:quotation_request, 3)
      get :index
      expect(assigns(:quotation_requests)).to match_array(quotation_requests)
    end
  end

  describe "GET edit" do
    let(:quotation_request) { FactoryGirl.create(:quotation_request) }

    it "renders :edit template" do
      get :edit, id: quotation_request
      expect(response).to render_template(:edit)
    end

    it "assigns the requested quotation request to template" do
      get :edit, id: quotation_request
      expect(assigns(:quotation_request)).to eq(quotation_request)
    end
  end

  describe "PUT update" do
    let(:quotation_request) { FactoryGirl.create(:quotation_request) }

    context "valid data" do
      new_text = Faker::Lorem.sentence(word_count=500)
      let(:valid_data) { FactoryGirl.attributes_for(:quotation_request, sample_text: new_text) }

      it "redirects to quotation_request#showtemplate" do
        put :update, id: quotation_request, quotation_request: valid_data
        expect(response).to redirect_to(quotation_request)
      end

      it "updates quotation request in the database" do
        put :update, id: quotation_request, quotation_request: valid_data
        quotation_request.reload # need to reload the object because we have just updated it in the database
        expect(quotation_request.sample_text).to eq(new_text)
      end
    end

    context "invalid data" do
      let(:invalid_data) { FactoryGirl.attributes_for(:quotation_request, sample_text: "", number_of_words: 400) }

      it "renders the :edit template" do
        put :update, id: quotation_request, quotation_request: invalid_data
        expect(response).to render_template(:edit)
      end

      it "does not update the quotation_request in the database" do
        put :update, id: quotation_request, quotation_request: invalid_data
        quotation_request.reload
        expect(quotation_request.number_of_words).not_to eq(400)
      end
    end
  end

  describe "GET new", new: true do
    it "renders :new template" do
      get :new
      expect(response).to render_template(:new)
    end

    it "assigns new QuotationRequest to @quotation_request" do
      get :new
      expect(assigns(:quotation_request)).to be_a_new(QuotationRequest)
    end
  end

  describe "GET show" do
    # this test requires that there be a quotation request in the database
    let(:quotation_request) { FactoryGirl.create(:quotation_request) }

    context 'invalid request' do
      it "does not render :show template if an employee or client is not signed in" do
        # setup
        quotation_request = create(:quotation_request)

        # exercise
        get :show, id: quotation_request

        # verification
        expect(response).to_not render_template(:show)
      end
    end

    context 'valid request' do
      sign_in_proofreader

      it "renders :show template if an employee or client is signed in" do
        # setup
        quotation_request = create(:quotation_request)

        # exercise
        get :show, id: quotation_request

        # verification
        expect(response).to render_template(:show)
      end

      it "assigns requested quotation_request to @quotation_request" do
        get :show, id: quotation_request
        expect(assigns(:quotation_request)).to eq(quotation_request)
      end
    end
  end

  describe "POST create", post: true do
    context "valid data" do
      let(:valid_data) { FactoryGirl.nested_attributes_for(:quotation_request) }

      it "redirects to quotation_requests#show" do
        post :create, quotation_request: valid_data
        expect(response).to redirect_to(quotation_request_path(assigns[:quotation_request]))
      end

      it "creates new quotation_request in database" do
        expect {
          post :create, quotation_request: valid_data
        }.to change(QuotationRequest, :count).by(1)
      end
    end

    context "invalid data" do
      let(:invalid_data) { FactoryGirl.nested_attributes_for(:quotation_request).merge(sample_text: 'not enough sample text') }

      it "renders :new template" do
        post :create, quotation_request: invalid_data
        expect(response).to render_template(:new)
      end

      it "doesn't creates new quotation_request in database" do
        expect {
          post :create, quotation_request: invalid_data
        }.not_to change(QuotationRequest, :count)
      end
    end
  end

  describe "DELETE destroy" do
    let(:quotation_request) { FactoryGirl.create(:quotation_request) }

    it "redirects to the quotation request#index" do
      delete :destroy, id: quotation_request
      expect(response).to redirect_to(quotation_requests_path)
    end

    it "delets the quotation request from the database" do
      delete :destroy, id: quotation_request
      expect(QuotationRequest.exists?(quotation_request.id)).to be_falsy
    end
  end
end

My quotation_requests_controller.rb

class QuotationRequestsController < ApplicationController
  # before_action :authenticate_employee!, :only => [:show]

  def index
    @quotation_requests = QuotationRequest.all
  end

  def new
    @quotation_request = QuotationRequest.new
    @quotation_request.build_client
  end

  def edit
    @quotation_request = QuotationRequest.find(params[:id])
  end

  def create
    client = Client.find_or_create(quotation_request_params[:client_attributes])
    @quotation_request = QuotationRequest.new(quotation_request_params.except(:client_attributes).merge(client: client))
    if @quotation_request.save
      ClientMailer.quotation_request_created(client.email, @quotation_request.id).deliver_now
      redirect_to @quotation_request, notice: 'Thank you.'
    else
      render :new
    end
  end

  def show
    @quotation_request = QuotationRequest.find(params[:id])
  end

  def update
    @quotation_request = QuotationRequest.find(params[:id])
    if @quotation_request.update(quotation_request_params)
      redirect_to @quotation_request
    else
      render :edit
    end
  end

  def destroy
    QuotationRequest.destroy(params[:id])
    redirect_to quotation_requests_path
  end

  private

  def quotation_request_params
    params.require(:quotation_request).permit(:number_of_words, :return_date, :sample_text, :client_attributes => [:first_name, :last_name, :email])
  end
end

I know the routes authentication works because if I test them in the browser I get redirected to the sign_in page. However, the tests don't pass in Rspec.

if I put this code in the quotation_requests_controller.rb:

 before_action :authenticate_employee!, :only => [:show] 

The rspec tests pass. So for some reason Rspec does not register the authentication of the routes.

Here is the output from Rspec for the tests run with the authenticated routes:

QuotationRequestsController
  GET index
    valid request
      renders :index template for signed in employee
      assigns quotation requests to template
    invalid request
      does not render :index template without a signed in employee (FAILED - 1)
  GET edit
    valid request
      renders :edit template with a signed in employee
      assigns the requested quotation request to template
    invalid request
      does not render the :edit template without a signed in employee (FAILED - 2)
  PUT update
    valid request
      valid data
        redirects to quotation_request#showtemplate
        updates quotation request in the database
      invalid data
        renders the :edit template
        does not update the quotation_request in the database
    invalid request
      redirects user to the sign in page (FAILED - 3)
  GET new
    renders :new template
    assigns new QuotationRequest to @quotation_request
  GET show
    invalid request
      does not render :show template if an employee or client is not signed in (FAILED - 4)
    valid request
      renders :show template if an employee or client is signed in
      assigns requested quotation_request to @quotation_request
  POST create
    valid data
      redirects to quotation_requests#show
      creates new quotation_request in database
    invalid data
      renders :new template
      doesn't creates new quotation_request in database
  DELETE destroy
    valid request
      redirects to the quotation request#index
      delets the quotation request from the database
    invalid request
      does not delete the quotation request without a signed in employee (FAILED - 5)

Failures:

  1) QuotationRequestsController GET index invalid request does not render :index template without a signed in employee
     Failure/Error: expect(response).to_not render_template(:index)
       Didn't expect to render index
     # ./spec/controllers/quotation_requests_controller_spec.rb:43:in `block (4 levels) in <top (required)>'
     # -e:1:in `<main>'

  2) QuotationRequestsController GET edit invalid request does not render the :edit template without a signed in employee
     Failure/Error: expect(response).to_not render_template(:edit)
       Didn't expect to render edit
     # ./spec/controllers/quotation_requests_controller_spec.rb:92:in `block (4 levels) in <top (required)>'
     # -e:1:in `<main>'

  3) QuotationRequestsController PUT update invalid request redirects user to the sign in page
     Failure/Error: expect(response).to_not redirect_to(quotation_request)
       Didn't expect to redirect to #<QuotationRequest:0x007fe7eb69c8c0>
     # ./spec/controllers/quotation_requests_controller_spec.rb:182:in `block (4 levels) in <top (required)>'
     # -e:1:in `<main>'

  4) QuotationRequestsController GET show invalid request does not render :show template if an employee or client is not signed in
     Failure/Error: expect(response).to_not render_template(:show)
       Didn't expect to render show
     # ./spec/controllers/quotation_requests_controller_spec.rb:217:in `block (4 levels) in <top (required)>'
     # -e:1:in `<main>'

  5) QuotationRequestsController DELETE destroy invalid request does not delete the quotation request without a signed in employee
     Failure/Error: expect(QuotationRequest.exists?(quotation_request.id)).to be_truthy
        expected: truthy value
            got: false
     # ./spec/controllers/quotation_requests_controller_spec.rb:361:in `block (4 levels) in <top (required)>'
     # -e:1:in `<main>'

Finished in 2.11 seconds (files took 1.75 seconds to load)
23 examples, 5 failures

Failed examples:

rspec ./spec/controllers/quotation_requests_controller_spec.rb:37 # QuotationRequestsController GET index invalid request does not render :index template without a signed in employee
rspec ./spec/controllers/quotation_requests_controller_spec.rb:83 # QuotationRequestsController GET edit invalid request does not render the :edit template without a signed in employee
rspec ./spec/controllers/quotation_requests_controller_spec.rb:171 # QuotationRequestsController PUT update invalid request redirects user to the sign in page
rspec ./spec/controllers/quotation_requests_controller_spec.rb:208 # QuotationRequestsController GET show invalid request does not render :show template if an employee or client is not signed in
rspec ./spec/cont

Why do the routes I have written not work in Rspec tests?

1 Answer

Answers 1

I take it you are using rspec-rails in your Rails app. rspec-rails sets up a lot of convenience methods for you, but it also introduces some black magic, which can lead to unexpected results - like this.

As you can see here it is explained in the comments for controller specs:

# Supports a simple DSL for specifying behavior of ApplicationController.
# Creates an anonymous subclass of ApplicationController and evals the
# `body` in that context. Also sets up implicit routes for this
# controller, that are separate from those defined in "config/routes.rb".

I guess the logic here is: controller features are different from routing and should be tested separately (and indeed rspec-rails offers a test group for routing), so we do not need the routes for controller specs - meaning you should be able to test your controller without setting up the routes.

In my opinion, testing the redirect for unauthenticated users is more of an integration test, since it requires multiple parts of your application to work together; as such, it should not be tested in the controller context, but rather as a feature in some black-box test.

You can write integration tests by placing them in one of the directories spec/requests, spec/api, or spec/integration, or by explicitly declaring their type with

RSpec.describe "Something", type: :request do 

or place it in spec/features or declare the type as

RSpec.describe "Something", type: :feature do 

depending on which level you want to test the redirect at (meaning: only exercise the request-response cycle, or run it in a simulated browser). Please refer to the documentation for integration tests on the rspec-rails GitHub page for more information.

Read More

Is it possible to launch mobile sensor with html5 but only with android webview?

Leave a Comment

I mean without Cordova or another framework. I'm pretty sure I need to write Java code and link it somehow with HTML5 through the Android WebView. If it is possible, can I get a little example of how to connect to the camera or another sensor?

2 Answers

Answers 1

Some of the sensors have a JavaScript API, such as geolocation, orientation (gyroscope) and the battery. To access the camera you could use MediaDevices.getUserMedia; however, this is still at an experimental stage and is not supported by all Android devices. For more information refer to this link.

Answers 2

Look into JavascriptInterface

https://developer.android.com/reference/android/webkit/WebView.html https://developer.android.com/guide/webapps/webview.html

Specifically, addJavascriptInterface(java.lang.Object, java.lang.String)

class JsInterface {
    @JavascriptInterface
    public void startCamera() { ... }
}

WebView myWebView = (WebView) findViewById(R.id.webview);
WebSettings webSettings = myWebView.getSettings();
webSettings.setJavaScriptEnabled(true);
myWebView.addJavascriptInterface(new JsInterface(), "androidInterface");

Basically, add the JavascriptInterface (note that @JavascriptInterface annotates the exposed methods, not the class) and enable JavaScript on the WebView. Then in your JavaScript you can detect whether the interface exists like so:

if ("undefined" != typeof androidInterface) {
    androidInterface.startCamera();
}

Now in the Java code for startCamera, you can do whatever native stuff you need done.

Read More

Saturday, September 24, 2016

Managing signalR notifications to synchronize client and server (c#)

1 comment

In my web application I want to load all data to the client side from the server on power-up. After that I want all communication to be managed through SignalR, meaning that on each update the server will send a notification to all clients and they will ask for the updated data.

However, I don't know what to do when the SignalR connection is lost and then comes back. I don't want to load all the data all over again. What I want to do is implement some sort of notification management on the server side for each disconnected client, and whenever the SignalR connection is made again, push to that specific client all the notifications that he has missed.

Our SignalR listeners on the client side are registered in singleton services instead of short-lived controllers, so that we can avoid a GET request on each view change and make the application faster and more user friendly. Because of that approach, new notifications also get handled and processed in the background, even when they aren't relevant to the current view the end user is on, like so:

// This service is initialized once only
class Service1 {
    static $inject = ['$rootScope'];
    array: Item[];

    // This is a singleton!
    public constructor($rootScope) {
        // Get all items from the server
        this.GetAllItemsFromServer();

        // Listener for SignalR updates
        var listener = $rootScope.$on("ItemsNotificationFromServer", this.UpdateItems);

        $rootScope.$on('$destroy', () => {
            // Stop the listener
            listener();
        });
    }

    // Getting all the items from the server
    GetAllItemsFromServer() {
        // Getting the items
    }

    // Handle the notification from the server
    public UpdateItems(event, result): void {
        //..
    }
}

At the moment what happens for example is that when an end user refreshes the browser (F5) I can not know what SignalR notifications this client has missed during the connection problems and so I load all the data from the server all over again (it sucks).

In order to prevent it I thought of implementing something like this -

namespace MapUsersSample
{
    public class UserContext : DbContext
    {
        // All these are cleared when the server is powered up
        public DbSet<Connection> Connections { get; set; }
        public DbSet<Notification> Notifications { get; set; }
    }

    public class Connection
    {
        [Key]
        [DatabaseGenerated(DatabaseGeneratedOption.None)]
        public string ConnectionID { get; set; }
        public bool Connected { get; set; }

        // I fill this when disconnected
        public List<Notification> MissedNotifications { get; set; }

        public Connection(string id)
        {
            this.ConnectionID = id;
            this.Connected = true;
            this.MissedNotifications = new List<Notification>();
        }
    }

    public abstract class Notification
    {
        public int Id { get; set; }
        public DateTime CreationTime { get; set; }
    }

    .. // Many notifications implement this
}

public class MyHub : Hub
{
    private readonly DbContext _db;

    public MyHub(DbContext db)
    {
        this._db = db;
    }

    // Adding a new connection or updating status to true
    public override Task OnConnected()
    {
        var connection = GetConnection(Context.ConnectionId);

        if (connection == null)
            _db.Connections.Add(new Connection(Context.ConnectionId));
        else
            connection.Connected = true;

        return base.OnConnected();
    }

    // Changing connection status to false
    public override Task OnDisconnected()
    {
        var connection = GetConnection(Context.ConnectionId);

        if (connection == null)
        {
            Log("Disconnect error: failed to find a connection with id : " + Context.ConnectionId);
        }
        else
        {
            connection.Connected = false;
        }
        return base.OnDisconnected();
    }

    public override Task OnReconnected()
    {
        var connection = GetConnection(Context.ConnectionId);

        if (connection == null)
        {
            Log("Reconnect error - failed to find a connection with id : " + Context.ConnectionId);
            return base.OnReconnected();
        }

        connection.Connected = true;

        // On reconnect, trying to send to the client all the notifications that he has missed
        foreach (var notification in connection.MissedNotifications)
        {
            Clients.Client(connection.ConnectionID).handleNotification(notification);
        }

        return base.OnReconnected();
    }

    // This method is called from clients that receive a notification
    public void clientNotified(string connectionId, int notificationId)
    {
        // Getting the connection
        var connection = GetConnection(connectionId);

        if (connection == null)
        {
            Log("clientNotified error - failed to find a connection with id : " + Context.ConnectionId);
            return;
        }

        // Getting the notification that the client was notified about
        var notificationToRemove = _db.Notifications.FirstOrDefault(n => n.Id == notificationId);

        if (notificationToRemove == null)
        {
            Log("clientNotified error - failed to find notification with id : " + notificationId);
            return;
        }

        // Removing from the missed notifications
        connection.MissedNotifications.Remove(notificationToRemove);
    }

    private Connection GetConnection(string connectionId)
    {
        return _db.Connections.Find(connectionId);
    }
}

// Notifications outside of the hub
public class Broadcaster
{
    IHubContext _hubContext;
    DbContext _db;

    public Broadcaster(DbContext db)
    {
        _hubContext = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
        _db = db;
    }

    public void NotifyClients(Notification notification)
    {
        var openConnections = _db.Connections.Where(x => x.Connected);
        var closedConnections = _db.Connections.Where(x => !x.Connected);

        // Adding all notifications to be sent when those connections are back
        foreach (var connection in closedConnections)
        {
            connection.MissedNotifications.Add(notification);
        }

        // Notifying all open connections
        foreach (var connection in openConnections)
        {
            _hubContext.Clients.Client(connection.ConnectionID).handleNotification(notification);
        }
    }
}

Client-side JavaScript:

handleNotification(notification) {
    hubProxy.server.clientNotified(hub.connection.id, notification.Id);

    // Keep handling the notification here..
}

I haven't gotten to test it yet, but before I present this idea to my team: is this approach common? I haven't seen people taking this approach and I wondered why. Are there any risks here?

2 Answers

Answers 1

You should check whether the client's data is up to date. This can be done with a hash or the datetime of the last change.

When a client reconnects, you should send the hash or last-change datetime of the current data to the client.

for example

{
    clients: '2016-05-05T09:05:05',
    orders: '2016-09-20T10:11:11'
}

And the client application will decide what data it needs to update.

On the client you can save the data to LocalStorage or SessionStorage.
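The comparison the answer describes is language-agnostic; here is a minimal sketch in Java (the dataset names and timestamps are taken from the answer's example, the method name is hypothetical): on reconnect, compare the server's last-change timestamps against the client's cached ones and refetch only what changed.

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

public class StalenessCheck {
    // Decide which datasets must be refetched after a reconnect by comparing
    // the server's last-change timestamps with the ones cached on the client.
    static Map<String, Instant> staleDatasets(Map<String, Instant> server,
                                              Map<String, Instant> client) {
        Map<String, Instant> stale = new HashMap<>();
        for (Map.Entry<String, Instant> e : server.entrySet()) {
            Instant cached = client.get(e.getKey());
            // Unknown locally, or server copy is newer -> needs refresh
            if (cached == null || e.getValue().isAfter(cached)) {
                stale.put(e.getKey(), e.getValue());
            }
        }
        return stale;
    }

    public static void main(String[] args) {
        Map<String, Instant> server = new HashMap<>();
        server.put("clients", Instant.parse("2016-05-05T09:05:05Z"));
        server.put("orders", Instant.parse("2016-09-20T10:11:11Z"));

        Map<String, Instant> client = new HashMap<>();
        client.put("clients", Instant.parse("2016-05-05T09:05:05Z"));
        client.put("orders", Instant.parse("2016-09-01T00:00:00Z"));

        // Only "orders" changed since the client last saw it
        System.out.println(staleDatasets(server, client).keySet()); // prints [orders]
    }
}
```

The same shape works with hashes instead of timestamps; only the equality test changes.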

Answers 2

At the moment what happens for example is that when an end user refreshes the browser (F5) I can not know what SignalR notifications this client has missed during the connection problems and so I load all the data from the server all over again (it sucks).

Pressing F5 to refresh the browser is a hard reset; all existing SignalR connections are lost and new connections are made to get data. Connection problems occur in scenarios where SignalR notices problems with the HTTP connection, e.g. due to temporary network issues. A browser refresh isn't a connection problem; it's an act of a user knowingly creating a new connection.

So your code for managing missed notifications would work only for SignalR connection issues. I don't think it'll work for a browser refresh, but then that is a new connection, so you haven't missed anything.

Read More

Where can I load the user information to the session in ASP.NET MVC 5 with windows authentication?

Leave a Comment

I want to use ASP.NET MVC 5 for my web app. I need to use Windows authentication.

If I use Windows authentication, where is the best place for reading the user information (user id and roles) and storing it in the Session?

I have a method for getting the user information by username from the database, like this:

public class CurrentUser
{
    public int UserId { get; set; }

    public string UserName { get; set; }

    public Roles Roles { get; set; }
}

public enum Roles
{
    Administrator,
    Editor,
    Reader
}

public class AuthService
{
    public CurrentUser GetUserInfo(string userName)
    {
        var currentUser = new CurrentUser();

        //load from DB

        return currentUser;
    }
}

1 Answer

Answers 1

First and foremost: never, never, never store user details in the session. Seriously. Just don't do it.

If you're using Windows Auth, the user is in AD. You have to use AD to get the user information. Microsoft has an MSDN article describing how this should be done.

The long and short of it is that you create a subclass of UserPrincipal and extend it with the additional properties you want to return on the user:

[DirectoryRdnPrefix("CN")]
[DirectoryObjectClass("inetOrgPerson")]
public class InetOrgPerson : UserPrincipal
{
    // Implement the constructor using the base class constructor.
    public InetOrgPerson(PrincipalContext context) : base(context)
    {
    }

    // Implement the constructor with initialization parameters.
    public InetOrgPerson(PrincipalContext context,
                         string samAccountName,
                         string password,
                         bool enabled)
                        : base(context,
                               samAccountName,
                               password,
                               enabled)
    {
    }

    InetOrgPersonSearchFilter searchFilter;

    new public InetOrgPersonSearchFilter AdvancedSearchFilter
    {
        get
        {
            if (null == searchFilter)
                searchFilter = new InetOrgPersonSearchFilter(this);

            return searchFilter;
        }
    }

    // Create the mobile phone property.
    [DirectoryProperty("mobile")]
    public string MobilePhone
    {
        get
        {
            if (ExtensionGet("mobile").Length != 1)
                return null;

            return (string)ExtensionGet("mobile")[0];
        }

        set
        {
            ExtensionSet("mobile", value);
        }
    }

    ...
}

In the example code above, a property is added that binds to the AD user's mobile field. This is done by implementing the property as shown, utilizing ExtensionGet/ExtensionSet, and then annotating it with the DirectoryProperty attribute to tell it which field it binds to.

The DirectoryRdnPrefix and DirectoryObjectClass attributes on the class need to line up with how your AD is set up.

Once this is implemented, then you will be able to get at the values simply by referencing them off User.Identity. For example, User.Identity.MobilePhone would return the mobile field from AD for the user.

Read More

App is crashing after capturing picture using intents

Leave a Comment

My app is crashing after capturing 5 to 6 photos using intents. Logcat shows nothing. I am unable to find the reason why it is crashing. Please help me out.

private void capturePhoto() {
    File root = new File(Environment.getExternalStorageDirectory(), "Feedback");
    if (!root.exists()) {
        root.mkdirs();
    }
    File file = new File(root, Constants.PROFILE_IMAGE_NAME + ".jpeg");
    Uri outputFileUri = Uri.fromFile(file);

    Intent photoPickerIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    photoPickerIntent.putExtra(MediaStore.EXTRA_OUTPUT, outputFileUri);
    photoPickerIntent.putExtra("outputFormat", Bitmap.CompressFormat.JPEG.toString());
    photoPickerIntent.putExtra("return-data", true);
    photoPickerIntent.putExtra("android.intent.extras.CAMERA_FACING", 1);
    startActivityForResult(photoPickerIntent, requestCode);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (this.requestCode == requestCode && resultCode == RESULT_OK) {
        File root = new File(Environment.getExternalStorageDirectory(), "Feedback");
        if (!root.exists()) {
            root.mkdirs();
        }
        File file = new File(root, Constants.PROFILE_IMAGE_NAME + ".jpeg");
        checkFlowIdisPresent(file);

        displayPic();
    }
}

private void displayPic() {
    String filePath = Environment.getExternalStorageDirectory().getAbsolutePath()
            + File.separator + "Feedback" + File.separator
            + Constants.PROFILE_IMAGE_NAME + ".jpeg";
    // Bitmap bmp = BitmapFactory.decodeFile(filePath);
    // Bitmap scaled = Bitmap.createScaledBitmap(bmp, 300, 300, true);

    File imgFile = new File(filePath);
    Bitmap bmp = decodeFile(imgFile);

    if (imgFile.exists()) {
        dispProfilePic.setImageBitmap(bmp);
    } else {
        dispProfilePic.setBackgroundResource(R.drawable.user_image);
    }
}

private Bitmap decodeFile(File f) {
    try {
        // Decode image size
        BitmapFactory.Options o = new BitmapFactory.Options();
        o.inJustDecodeBounds = true;
        BitmapFactory.decodeStream(new FileInputStream(f), null, o);

        // The new size we want to scale to
        final int REQUIRED_SIZE = 70;

        // Find the correct scale value. It should be a power of 2.
        int scale = 1;
        while (o.outWidth / scale / 2 >= REQUIRED_SIZE &&
                o.outHeight / scale / 2 >= REQUIRED_SIZE) {
            scale *= 2;
        }

        // Decode with inSampleSize
        BitmapFactory.Options o2 = new BitmapFactory.Options();
        o2.inSampleSize = scale;
        return BitmapFactory.decodeStream(new FileInputStream(f), null, o2);
    } catch (FileNotFoundException e) {
    }
    return null;
}

The above code captures a photo and displays the captured picture in an ImageView. I am using an MI tab.

Edit: actually the app is not crashing... it becomes a white screen, and if I press any button then it crashes; onActivityResult is not executed when it becomes a white screen.

New Edit: I am able to replicate this. I clicked on Android Monitor and then clicked Monitor. It shows the memory utilization of the app while I interact with it. In the left sidebar I clicked the terminate-application icon. Now the interesting thing is that it destroys the current activity and moves to the previous activity. That previous activity becomes a white screen.

Please help me out guys.

9 Answers

Answers 1

Try this code. I use it in some of my apps :

Launch intent method:

private void launchCamera() {
    Intent cameraIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    startActivityForResult(cameraIntent, CAMERA_PIC_REQUEST);
}

Capturing result:

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    try {
        if (requestCode == CAMERA_PIC_REQUEST) {
            if (data != null) {
                Bundle extras = data.getExtras();
                if (extras != null) {
                    Bitmap thumbnail = (Bitmap) extras.get("data");
                    if (thumbnail != null)
                        displayPic(thumbnail);
                }
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Answers 2

Well, your code looks fine...

I think you save the image, or overwrite an image, on the same path with the same name, so there may be a memory problem. I recommend you change the name using System.currentTimeMillis() or any random name instead of Constants.PROFILE_IMAGE_NAME.
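A minimal sketch of the unique-name idea (the "IMG_" prefix is an assumption; the ".jpeg" extension comes from the question's code): embed the current time in milliseconds so two captures can never collide on the same path.

```java
public class UniqueName {
    // Build a file name that cannot collide with a previous capture,
    // by embedding the current time in milliseconds.
    static String uniqueImageName() {
        return "IMG_" + System.currentTimeMillis() + ".jpeg";
    }

    public static void main(String[] args) {
        // Each call yields a fresh name, e.g. IMG_1475222400000.jpeg
        System.out.println(uniqueImageName());
    }
}
```

In the question's capturePhoto() this would replace the fixed Constants.PROFILE_IMAGE_NAME, with the generated name stored in a field so onActivityResult can find the file again.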

And Also check the permission

 <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> 

Also check this permission at run time... for runtime permission handling, follow this:

private static final int REQUEST_RUNTIME_PERMISSION = 123;

if (CheckPermission(demo.this, Manifest.permission.WRITE_EXTERNAL_STORAGE)) {
    capturePhoto();
} else {
    // you do not have permission, go request runtime permissions
    RequestPermission(demo.this, Manifest.permission.WRITE_EXTERNAL_STORAGE, REQUEST_RUNTIME_PERMISSION);
}

public void RequestPermission(Activity thisActivity, String Permission, int Code) {
    if (ContextCompat.checkSelfPermission(thisActivity, Permission)
            != PackageManager.PERMISSION_GRANTED) {
        if (ActivityCompat.shouldShowRequestPermissionRationale(thisActivity, Permission)) {
            capturePhoto();
        } else {
            ActivityCompat.requestPermissions(thisActivity,
                    new String[]{Permission},
                    Code);
        }
    }
}

public boolean CheckPermission(Activity context, String Permission) {
    if (ContextCompat.checkSelfPermission(context, Permission) == PackageManager.PERMISSION_GRANTED) {
        return true;
    } else {
        return false;
    }
}

Answers 3

This possibly happens because the calling Activity gets killed and then restarted by the OS, as the IMAGE_CAPTURE intent deals with a huge amount of memory for processing the bitmap captured via the camera.

Solution: save the file path of the image and use it when onActivityResult is called. You can use the onSaveInstanceState and onRestoreInstanceState methods to save and retrieve the IMAGE_PATH and other fields of the activity.

You can refer to this link for how to use onSaveInstanceState and onRestoreInstanceState.

Answers 4

Try to use below code. It works fine for me.

private static final int REQUEST_CAMERA = 1;

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);

    if (resultCode == RESULT_OK) {
        if (requestCode == REQUEST_CAMERA) {
            Bitmap thumbnail = (Bitmap) data.getExtras().get("data");
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            thumbnail.compress(Bitmap.CompressFormat.JPEG, 90, bytes);

            File destination = new File(Environment.getExternalStorageDirectory(),
                    System.currentTimeMillis() + ".jpg");

            FileOutputStream fos;

            try {
                destination.createNewFile();
                fos = new FileOutputStream(destination);
                fos.write(bytes.toByteArray());
                fos.close();
            } catch (FileNotFoundException fnfe) {
                fnfe.printStackTrace();
            } catch (IOException ioe) {
                ioe.printStackTrace();
            }
            ivSetImage.setImageBitmap(thumbnail);
        }
    }
}

In the given code snippet, I have compressed the captured image, which resolves the app-crashing problem.

In your case, the captured image quality might be high, due to which your app crashes while setting the image on the ImageView.

Just try compressing an image. It will work!
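For intuition on why a full-resolution bitmap can exhaust memory, here is a rough back-of-the-envelope calculation (the 3000x4000 resolution is an assumed example, not from the question): an ARGB_8888 bitmap costs 4 bytes per pixel in memory, no matter how small the JPEG is on disk.

```java
public class BitmapMemory {
    // An uncompressed ARGB_8888 bitmap uses 4 bytes per pixel in memory,
    // regardless of how small the JPEG file on disk is.
    static long bytesFor(int width, int height) {
        return (long) width * height * 4;
    }

    public static void main(String[] args) {
        // A hypothetical 12-megapixel capture (3000 x 4000 pixels)
        long full = bytesFor(3000, 4000);
        System.out.println(full / (1024 * 1024) + " MB"); // prints "45 MB"

        // Downsampled with inSampleSize = 8: only 1/64 of the pixels remain
        long sampled = bytesFor(3000 / 8, 4000 / 8);
        System.out.println(sampled / 1024 + " KB");
    }
}
```

Holding a few bitmaps of that size at once easily exceeds a typical Android heap, which is why compressing or downsampling before display helps.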

Don't forget to add permission in manifest file.

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> 

Answers 5

Try doing it in an AsyncTask, because the issue you are facing is due to the heavy processing done on the UI thread.

Refer here for more help on AsyncTask implementation.

Answers 6

If nothing is displayed in Logcat it is very difficult to speculate, but please check whether the problem occurs only when using the emulator and not on a real device. You can also check whether you can recreate the problem by making the emulator's capacity smaller (RAM and internal memory). If that is the case, then increase the memory or RAM of your emulator and it should work fine. You then need to work on optimizing your image-processing task for lower-spec devices.

Hope this helps.

Answers 7

This may be a memory problem: you are taking photos and storing them in a bitmap. Check Android Monitor for the app's memory consumption. Just make this method static:

private static Bitmap decodeFile(File f) {
    try {
        // Decode image size
        BitmapFactory.Options o = new BitmapFactory.Options();
        o.inJustDecodeBounds = true;
        BitmapFactory.decodeStream(new FileInputStream(f), null, o);

        // The new size we want to scale to
        final int REQUIRED_SIZE = 70;

        // Find the correct scale value. It should be a power of 2.
        int scale = 1;
        while (o.outWidth / scale / 2 >= REQUIRED_SIZE &&
                o.outHeight / scale / 2 >= REQUIRED_SIZE) {
            scale *= 2;
        }

        // Decode with inSampleSize
        BitmapFactory.Options o2 = new BitmapFactory.Options();
        o2.inSampleSize = scale;
        return BitmapFactory.decodeStream(new FileInputStream(f), null, o2);
    } catch (FileNotFoundException e) {
    }
    return null;
}

Save files with different names, e.g. using a timestamp as the name.
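The power-of-two scale loop in decodeFile can be checked in isolation. This is a plain-Java extraction of that exact logic (REQUIRED_SIZE = 70, as in the answer's code), useful for seeing how aggressively a photo gets downsampled:

```java
public class SampleSize {
    // Mirrors the loop in decodeFile: keep doubling the sample size while the
    // downsampled image would still be at least requiredSize in both dimensions.
    static int computeScale(int outWidth, int outHeight, int requiredSize) {
        int scale = 1;
        while (outWidth / scale / 2 >= requiredSize &&
                outHeight / scale / 2 >= requiredSize) {
            scale *= 2;
        }
        return scale;
    }

    public static void main(String[] args) {
        // A 3000x4000 photo decoded toward a 70px target
        System.out.println(computeScale(3000, 4000, 70)); // prints 32
    }
}
```

A scale of 32 means the decoded bitmap holds 1/1024 of the original pixels, which is why this pattern keeps memory use low.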

Answers 8

Check your Manifest.xml file for the External Storage permission

and the Camera permission.

If your app runs on Marshmallow, also check the runtime permission.

Answers 9

Try to use below code:

private void launchCamera() {
    Intent cameraIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    startActivityForResult(cameraIntent, CAMERA_PIC_REQUEST);
}

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    try {
        if (requestCode == CAMERA_PIC_REQUEST) {
            if (data != null) {
                Bundle extras = data.getExtras();
                if (extras != null) {
                    Bitmap thumbnail = (Bitmap) extras.get("data");
                    if (thumbnail != null)
                        displayPic(thumbnail);
                }
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}


Read More