Wednesday, January 31, 2018

Android - protecting in app purchases with server side verification


I'm new to Android development but created an app and implemented in-app purchase to remove ads from the app. I did a very basic implementation: I simply check whether the user has purchased the "no_ads" item, and if so, no ads are shown. The problem is that I see a lot of "purchases" being logged on Firebase and nothing on the Play Console, which of course means that my users are using those hacking apps. So my question is: how do I protect/verify those purchases against a server so these hacking apps are useless? I already have a server that my app uses, so implementing server-side code is no problem for me. It would be great if someone could point me to a tutorial. Thanks

3 Answers

Answers 1

My small contribution to reduce fraud in in-app purchases

Signature verification on an external server; in your Android code:

verifySignatureOnServer()

private boolean verifySignatureOnServer(String data, String signature) {
    String retFromServer = "";
    URL url;
    HttpsURLConnection urlConnection = null;
    try {
        String urlStr = "https://www.example.com/verify.php?data=" + URLEncoder.encode(data, "UTF-8")
                + "&signature=" + URLEncoder.encode(signature, "UTF-8");
        url = new URL(urlStr);
        urlConnection = (HttpsURLConnection) url.openConnection();
        InputStream in = urlConnection.getInputStream();
        InputStreamReader inRead = new InputStreamReader(in);
        retFromServer = convertStreamToString(inRead);
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (urlConnection != null) {
            urlConnection.disconnect();
        }
    }
    return retFromServer.equals("good");
}

convertStreamToString()

private static String convertStreamToString(java.io.InputStreamReader is) {
    java.util.Scanner s = new java.util.Scanner(is).useDelimiter("\\A");
    return s.hasNext() ? s.next() : "";
}

verify.php in the root directory of your web hosting:

<?php
// get data param
$data = $_GET['data'];

// get signature param
$signature = $_GET['signature'];

// get key
$key_64 = ".... put here the base64 encoded pub key from google play console , all in one line !! ....";

$key = "-----BEGIN PUBLIC KEY-----\n" .
       chunk_split($key_64, 64, "\n") .
       '-----END PUBLIC KEY-----';

// using PHP to create an RSA key
$key = openssl_get_publickey($key);

// state whether signature is okay or not
$ok = openssl_verify($data, base64_decode($signature), $key, OPENSSL_ALGO_SHA1);
if ($ok == 1) {
    echo "good";
} elseif ($ok == 0) {
    echo "bad";
} else {
    die("fault, error checking signature");
}

// free the key from memory
openssl_free_key($key);
?>

NOTES:

  • You should obfuscate or encrypt the URL string in your Java code.

  • It is also better to change the PHP file name and the URL argument names to something meaningless.

Hope it will help ...

Answers 2

  1. Add an entry in a database when the user makes an in-app purchase.
  2. When the user opens your app, check whether the purchase is valid or invalid.
  3. If valid, proceed to the next activity; otherwise show an error message.
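The three steps above can be sketched as follows; this is a minimal illustration in Python, with a hypothetical in-memory dictionary standing in for the real database table (all names here are made up):

```python
# Hypothetical in-memory store: order_id -> purchase record.
# In production this would be a database table.
purchases = {}

def record_purchase(order_id, user, product):
    """Step 1: add an entry when the user makes an in-app purchase."""
    purchases[order_id] = {"user": user, "product": product, "valid": True}

def is_purchase_valid(order_id):
    """Step 2: on app open, check whether the purchase is valid."""
    entry = purchases.get(order_id)
    return entry is not None and entry["valid"]

# Step 3 happens in the app: proceed if valid, show an error otherwise.
record_purchase("GPA.1234", "user42", "no_ads")
print(is_purchase_valid("GPA.1234"))   # True
print(is_purchase_valid("GPA.9999"))   # False
```

In a real deployment the `valid` flag would be set only after verifying the purchase signature or token server-side, and the store would be keyed by the order ID that Google Play returns with each purchase.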

Answers 3

For security you may add native code in C++ and do the URL/web-service call from it.

For the in-app verification at login time (and for offline storage, using native code):

1. Call queryPurchases() / queryInventoryAsync() to get the purchased items.

2. Call https://accounts.google.com/o/oauth2/token to obtain an access_token.

3. Call the in-app verification endpoint:

https://www.googleapis.com/androidpublisher/v2/applications/{packageName}/purchases/products/{sku}/tokens/{purchaseToken}?access_token={access_token}

  • Now you can do server call for your products
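As an illustration of assembling that verification call, here is a small Python sketch (placeholder values throughout; build_verify_url is a hypothetical helper, not part of any Google library):

```python
from urllib.parse import quote

def build_verify_url(package_name, sku, purchase_token, access_token):
    # Assemble the androidpublisher v2 products.get URL from the answer above.
    return (
        "https://www.googleapis.com/androidpublisher/v2/applications/"
        f"{quote(package_name)}/purchases/products/{quote(sku)}"
        f"/tokens/{quote(purchase_token)}?access_token={quote(access_token)}"
    )

# Placeholder values; a real call needs your package name, SKU,
# the purchase token from the client, and a valid OAuth2 access token.
url = build_verify_url("com.example.app", "no_ads", "token123", "ya29.abc")
print(url)
```

The endpoint returns a ProductPurchase JSON resource whose purchaseState field indicates whether the purchase is valid.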

Projects dependencies for custom configuration


We are using CocoaPods to link different projects together. We have a main project with a target (project1) that has 3 configurations: release, debug, and qa, a custom duplicate of release.

We have 3 external libraries that project1 depends on that are not CocoaPods compatible; let's call those external1, external2 and external3. Those external projects only have 2 configurations, the default release and debug.

Our Podfile looks like this:

platform :ios, '8.0'
workspace 'project1.xcworkspace'

pod ...
pod ...

target 'project1'
target 'project1-cal'

target 'external1' do
  project '[...]/external1.xcodeproj', 'qa' => :release
end

target 'external2' do
  project '[...]/external2.xcodeproj', 'qa' => :release
end

target 'external3' do
  project '[...]/external3.xcodeproj', 'qa' => :release
end

This setup fails when I try to build for qa with the following error:

error: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/libtool: can't locate file for: -lPods-external1

The only way I can fix this is by manually adding qa configurations to external1, external2 and external3.

Can someone please help with this, by explaining what I am doing wrong? I lack in-depth knowledge of how exactly CocoaPods works.

N.B.: The Pods-external*.qa.xcconfig files are being properly created by pod install in Target Support Files/Pods-external1, albeit they aren't appearing in Xcode, and no qa configurations are being added.

0 Answers


How do I include a Jquery UI slider in my view?


I'm trying to incorporate a slider (http://simeydotme.github.io/jQuery-ui-Slider-Pips/#installation) into my view, which is supposedly part of jquery-ui. I have these in my Gemfile

gem 'jquery-rails'
gem 'jquery-ui-rails'

And when i open Gemfile.lock I see

jquery-rails (4.3.1)
  rails-dom-testing (>= 1, < 3)
  railties (>= 4.2.0)
  thor (>= 0.14, < 2.0)
jquery-ui-rails (6.0.1)
  railties (>= 3.2.16)

But although I have this HTML on my page

<div class="slider"></div> 

and I include this Javascript

$('.slider').slider().slider('pips'); 

I get the JS console error

Uncaught TypeError: $(...).slider is not a function 

when my page loads. The documentation says I have to include jQuery 2.1.1 and Jquery UI 2.1.1. I can't tell if I'm doing that properly or not.

Edit: Including content of app/assets/application.js in response to answer given ...

//= require jquery
//= require jquery_ujs
//= require turbolinks
//= require_tree .

2 Answers

Answers 1

It seems everything is OK; you just need a small adjustment. At the top of assets/application.js, order the requires like this:

//= require jquery
//= require jquery_ujs

Make sure jQuery is not being loaded twice and that your slider library is included properly, then edit your slider JS like below:

(function($){
    "use strict";
    $(document).on('ready', function(){
        $('.slider').slider().slider('pips');
    });
})(jQuery);

Or like below

$(document).on('turbolinks:load', function(){
    $('.slider').slider().slider('pips');
});

Alternatively, you can put this JS on the same page, underneath your slider, inside a <script type="text/javascript"> ... </script> tag.

Restart the server after implementing this

Hope it helps

Answers 2

Try this instead: Include the js from within the html

<script src="//code.jquery.com/jquery-3.3.1.min.js"></script>
<script src="//code.jquery.com/ui/1.12.1/jquery-ui.min.js"></script>

See if it works after that.


Add Bundle Items for specific Scheme?


I would like to add some bundle items just for a specific scheme. I could create a run script, but there I am unable to read the current scheme.

Is there another way to add some bundle files only for a specific scheme in Xcode 9.x?

3 Answers

Answers 1

In a run script you can read the current build configuration.

You can duplicate your current build configuration; this will keep all your settings for that config. After that, rename the new config and refer to it in the scheme you would like to add bundle items to. (There is no need for a separate scheme: you can say, for example, that you want to add those resources only for running; in that case just set your new config as the Run build configuration of your existing scheme.)

After you have all set up you can check on specific config in your run-script like this:

if [ "${CONFIGURATION}" = "BetaDebug" ] || [ "${CONFIGURATION}" = "BetaRelease" ]; then
    : # Do something specific for that config
elif [ "${CONFIGURATION}" = "ProductionDebug" ] || [ "${CONFIGURATION}" = "ProductionRelease" ]; then
    : # Do something specific for that config
fi

where "${CONFIGURATION}" is config name.

Answers 2

Run a script like this after the Copy Bundle Resources phase:

if [ "${CONFIGURATION}" = "Release" ]; then
    cp -r "${PROJECT_DIR}/Settings/production/Settings.bundle" "${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.app"
fi

Answers 3

Unfortunately you can't add a file just for a specific scheme using Xcode 9, but you can create a new target by duplicating the other one and configure your particular scheme to build that special target. In this way you can add files just to the special target (and indirectly to the special scheme). Target duplication is very easy to do; you can take a look at this answer.

Here you can find a tutorial on how to use targets to manage development builds Vs production builds.

Hope this answers your question.


Control statement for removing attributes and classes having opposite effect


I'm building a registration page and I want the button to be disabled until all of the inputs pass validation. I have all of the native validation logic done (missing values, pattern mismatch, etc.), but I wanted to implement a "username taken/available" piece of validation: the button wouldn't be enabled until the user had valid input for all of the inputs AND supplied a desired username that was not already in use.
I have the server call and all of that done; my only issue is the actual enabling/disabling of the button and assigning the border classes to the inputs. Here is my code for the response from the AJAX call:

ajax.onload = function() {
    if (this.responseText === "taken") {
        if (username.classList.contains("taken")) {
            return;
        } else {
            username.classList.remove("successBorder");
            username.classList.add("errorBorder");
            username.classList.add("taken");
        }
    } else {
        if (!username.checkValidity()) {
            username.classList.remove("successBorder");
            username.classList.add("errorBorder");
            return;
        } else {
            username.classList.remove("errorBorder");
            username.classList.add("successBorder");
            username.classList.remove("taken");
        }
    }
}

And then here is the code for where the button is enabled/disabled that is called on the input event for every input element:

function validate() {
    if (document.querySelector("form").checkValidity() && !(username.classList.contains("taken"))) {
        registerButton.removeAttribute("disabled");
        const ruleSpans = document.querySelectorAll("span[data-rule]");
        for (span of ruleSpans) {
            span.classList.add("hide");
        }
        for (input of inputs) {
            input.classList.remove("errorBorder");
            input.classList.add("successBorder");
        }
        return;
    }

    registerButton.setAttribute("disabled", "true");

    if (this.checkValidity()) {
        // Get rid of the error messages
        this.classList.remove("errorBorder");
        this.classList.add("successBorder");
        const ruleSpans = document.getElementsByClassName(this.id);
        for (span of ruleSpans) {
            span.classList.add("hide");
        }
        return;
    }

    // Adding attention borders and error messages based upon what the issue is
    this.classList.remove("successBorder");
    this.classList.add("errorBorder");
    const ruleSpans = document.getElementsByClassName(this.id);
    for (span of ruleSpans) {
        span.classList.add("hide");
        switch (span.getAttribute("data-rule")) {
            case "patternMismatch":
                if (this.validity.patternMismatch) {
                    span.classList.remove("hide");
                }
                break;
            case "valueMissing":
                if (this.validity.valueMissing) {
                    span.classList.remove("hide");
                }
                break;
            case "typeMismatch":
                if (this.validity.typeMismatch) {
                    span.classList.remove("hide");
                }
                break;
        }
    }
}

Right now, the disabling/enabling works IF it's the first time on input for that element, but it is "behind" all of the times after the first (for example, if the username is taken, the register button is enabled, and if the username is available, the register button is disabled: the exact opposite of what I want).
So I thought, instead of checking for it the correct way (the way I did it in the code, !(username.classList.contains("taken"))), I would reverse the logic to username.classList.contains("taken"). And that works (even though it is logically wrong and incredibly hacky), EXCEPT for the first time a taken username is entered.
What am I doing logically wrong here?

2 Answers

Answers 1

I would suggest a code structure like this:

function serverValidation() {
    // make the ajax call here to run all server-side validation
    // pass 'clientValidations' as the success callback handler
}
function clientValidations() {
    // validate other form elements that do not require a server request here
    // Then submit the form through an ajax form submit
    submitFormThroughAjax();
}
function submitFormThroughAjax() {
    // submit the form through ajax.
}
function onSubmit(event) {
    event.preventDefault();
    serverValidation();
}
// Here onSubmit should be attached to the form submit handler.

Refer to the link below to learn how to submit a form through AJAX:

https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Using_XMLHttpRequest

This example does all the validations only after the user submits, but if you want the errors to be shown instantly as the user interacts, you need to handle it through the specific form elements' events.

Answers 2

You can directly assign the classes in CSS; the :invalid pseudo-selector is available.


Performant 2D OpenGL graphics in R for fast display of raster image using qtpaint (qt) or rdyncall (SDL/OpenGL) packages?


For a real-time interactive Mandelbrot viewer I was making in R & Rcpp+OpenMP & Shiny I am on the lookout for a performant way to display 1920x1080 matrices as raster images in the hope of being able to achieve ca. 5-10 fps (calculating the Mandelbrot images themselves now achieves ca. 20-30 fps at moderate zooms, and certainly scrolling around should go fast). Using image() with option useRaster=TRUE, plot.raster or even grid.raster() still doesn't quite cut it, so I am on the lookout for a more performant option, ideally using OpenGL acceleration.

I noticed that there are qt wrapper packages qtutils and qtpaint http://finzi.psych.upenn.edu/R/library/qtutils/html/sceneDevice.html where you can set argument opengl=TRUE and http://finzi.psych.upenn.edu/R/library/qtpaint/html/qplotView.html again with argument opengl=TRUE and http://finzi.psych.upenn.edu/R/library/qtpaint/html/painting.html.

And I also noticed that one should be able to call SDL and GL/OpenGL functions using the rdyncall package (install from https://cran.r-project.org/src/contrib/Archive/rdyncall/ and SDL from https://www.libsdl.org/download-1.2.php), demos available at http://hg.dyncall.org/pub/dyncall/bindings/file/87fd9f34eaa0/R/rdyncall/demo/00Index, e.g. http://hg.dyncall.org/pub/dyncall/bindings/file/87fd9f34eaa0/R/rdyncall/demo/randomfield.R).

Am I correct that with these packages one should be able to display a 2D image raster using OpenGL acceleration? If so, has anyone any thoughts on how to do this (I'm asking because I'm not an expert in either Qt or SDL/OpenGL)?

Some timings of non-OpenGL options which are too slow for my application:

# some example data & desired colour mapping of [0-1] ranged data matrix
library(RColorBrewer)
ncol = 1080
cols = colorRampPalette(RColorBrewer::brewer.pal(11, "RdYlBu"))(ncol)
colfun = colorRamp(RColorBrewer::brewer.pal(11, "RdYlBu"))
col = rgb(colfun(seq(0, 1, length.out = ncol)), max = 255)
mat = matrix(seq(1:1080)/1080, nrow = 1920, ncol = 1080, byrow = TRUE)
mat2rast = function(mat, col) {
  idx = findInterval(mat, seq(0, 1, length.out = length(col)))
  colors = col[idx]
  rastmat = t(matrix(colors, ncol = ncol(mat), nrow = nrow(mat), byrow = TRUE))
  class(rastmat) = "raster"
  return(rastmat)
}
system.time(mat2rast(mat, col)) # 0.24s

# plot.raster method - one of the best?
par(mar = c(0, 0, 0, 0))
system.time(plot(mat2rast(mat, col), asp = NA)) # 0.26s

# grid graphics - tie with plot.raster?
library(grid)
system.time(grid.raster(mat2rast(mat, col), interpolate = FALSE)) # 0.28s

# base R image()
par(mar = c(0, 0, 0, 0))
system.time(image(mat, axes = FALSE, useRaster = TRUE, col = cols)) # 0.74s
# note Y is flipped compared to the 2 options above - but not so important
# as I can fill the matrix the way I want

# magick - browser viewer, so no good....
# library(magick)
# image_read(mat2rast(mat, col))

# imager - doesn't plot in base R graphics device, so this one won't work together with Shiny
# If you wouldn't have to press ESC to return control to R this
# might have some potential though...
library(imager)
display(as.cimg(mat2rast(mat, col)))

# ggplot2 - just for the record...
df = expand.grid(y = 1:1080, x = 1:1920)
df$z = seq(1, 1080)/1080
library(ggplot2)
system.time({q <- qplot(data = df, x = x, y = y, fill = z, geom = "raster") +
                 scale_x_continuous(expand = c(0, 0)) +
                 scale_y_continuous(expand = c(0, 0)) +
                 scale_fill_gradientn(colours = cols) +
                 theme_void() + theme(legend.position = "none"); print(q)}) # 11s

0 Answers


Extending custom router to default router across apps in Django Rest Framework


I have come across a problem with keeping the API apps separate while still being able to use the browsable API for navigation.

I have previously used a separate routers.py file in my main application containing the following extension of the DefaultRouter:

class DefaultRouter(routers.DefaultRouter):
    def extend(self, router):
        self.registry.extend(router.registry)

Followed by adding the other application routers like this:

from .routers import DefaultRouter
from app1.urls import router as app1_router

# Default Router
mainAppRouter = DefaultRouter()
mainAppRouter.extend(app1_router)

where the app1_router is a new SimpleRouter object.

Now the problem occurs when I want to modify the SimpleRouter and create my own App1Router, such as this

class App1Router(SimpleRouter):

    routes = [
        Route(
            url = r'^{prefix}{trailing_slash}$',
            mapping = {
                'get': 'retrieve',
                'post': 'create',
                'patch': 'partial_update',
            },
            name = '{basename}-user',
            initkwargs = {}
        ),
    ]

This does not handle my extension correctly. For example, GET and PATCH are not recognized as allowed methods whenever I extend the router, but when I don't extend and only use the custom router, everything works fine.

My question is therefore: how can I handle extending custom routers across separate applications while still maintaining a good browsable API?

0 Answers


How to reuse ArrayDescriptor?


I tried the code below:

public class Abc {

    private ArrayDescriptor arrayDesc;

    void init() {
        connection = // create connection
        arrayDesc = ArrayDescriptor.createDescriptor("DBTYPE", connection);
    }

    void m1() {
        conn1 = // create connection
        ARRAY array_to_pass1 = new ARRAY(arrayDesc, conn1, idsArray1);
    }

    void m2() {
        conn2 = // create connection
        ARRAY array_to_pass2 = new ARRAY(arrayDesc, conn2, idsArray2);
    }
}

This code is giving the error below:

table.java.sql.SQLException: Missing descriptor at oracle.sql.DatumWithConnection.assertNotNull(DatumWithConnection.java:103)

How can this be resolved?

2 Answers

Answers 1

ArrayDescriptor is deprecated. Assuming your connection objects are of type OracleConnection, try using createOracleArray instead - something like this:

public class Abc {
    void init() {
        connection = // create connection
    }

    void m1() {
        conn1 = // create connection
        // createOracleArray takes the SQL type name rather than a descriptor
        Array array_to_pass1 = conn1.createOracleArray("DBTYPE", idsArray1);
    }

    void m2() {
        conn2 = // create connection
        Array array_to_pass2 = conn2.createOracleArray("DBTYPE", idsArray2);
    }
}

Note: Using this method, the arrays will be of type java.sql.Array rather than oracle.sql.ARRAY.

Answers 2

new ARRAY must be called with an ArrayDescriptor that uses the same connection. So what you're trying to do won't work. Note that each connection has a cache of descriptors so creating the descriptor will happen just once per connection.


PDF as blank page in HTML


My problem is that everything was fine opening PDFs in my browsers until I uploaded a PDF with a form inside. If I embed that one, it returns a blank page, but the other PDFs with forms open normally. Please see my code below:

<object data="{{ asset($test->file_path) }}" type="application/pdf" width="100%" height="100%">
    <embed src="{{ asset($test->file_path) }}" type="application/pdf">
    <center>
        <a href="{{ route('download.test', ['id' => $test->id]) }}" class="btn btn-primary">Please click here to view</a>
    </center>
</object>

Note: I've also tried to use <iframe>, but it still returns a blank page.

4 Answers

Answers 1

<a href="{{ route('download.test', ['id' => $test->id] ,['target'=>'_blank']) }}" class="btn btn-primary">Please click here to view</a> 

Answers 2

It's late, and I'm tired, so apologies if I misread the question.

I noticed that the PDF is hosted on a site that doesn't support HTTPS. It showed a blank page if it was embedded on a site using HTTPS, but worked fine when it was using HTTP.

I think you need to either move the PDF to a site that supports HTTPS or make the site hosting the PDF start using HTTPS.

Answers 3

Consider using <object> and <iframe> (rather than <object> and <embed>).

Something like this should work for you:

<object data="http://foersom.com/net/HowTo/data/OoPdfFormExample.pdf" type="application/pdf" width="100%" height="100%">
    <iframe src="http://foersom.com/net/HowTo/data/OoPdfFormExample.pdf" width="100%" height="100%" style="border: none;">
        This browser does not support PDFs. Please download the PDF to view it: <a href="/pdf/example.pdf">Download PDF</a>
    </iframe>
</object>

This worked when I tested it locally but I can't show JSFiddle since it uses HTTPS. Also, have a look at these examples: https://pdfobject.com/static.html

Answers 4

Not sure if this will work as I am not able to test your case. You can try this, it always works for me. Try replacing http://yoursite.com/the.pdf with the correct path.

<object data="http://yoursite.com/the.pdf" type="application/pdf" width="750px" height="750px">
    <embed src="http://yoursite.com/the.pdf" type="application/pdf">
        <p>This browser does not support PDFs. Please download the PDF to view it: <a href="http://yoursite.com/the.pdf">Download PDF</a>.</p>
    </embed>
</object>

Tuesday, January 30, 2018

How to add a fragment to an Activity without a container


Is it possible to add a fragment view to the activity's view without specifying a "fragment" view component in the activity's layout XML file? Which function should I look for?

9 Answers

Answers 1

Well, the UI of the fragment has to go somewhere. If you want the entire "content view" to be the fragment, add the fragment to android.R.id.content:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    if (getSupportFragmentManager().findFragmentById(android.R.id.content) == null) {
        getSupportFragmentManager().beginTransaction()
            .add(android.R.id.content, new ToDoRosterListFragment())
            .commit();
    }
}

Otherwise, somewhere in the activity's view hierarchy, you need a container (usually a FrameLayout) in which to place the fragment's UI. Typically, we do that by putting the container in the layout resource.

Answers 2

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    getSupportFragmentManager().beginTransaction()
            .add(android.R.id.content, MyFragment.newInstance())
            .commit();
    // Some of your code here
}

android.R.id.content is the container for the entire app screen. It can be used with a Fragment:

The android.R.id.content ID value indicates the ViewGroup of the entire content area of an Activity.

The code above will insert the View created by Fragment into the ViewGroup identified by android.R.id.content.

Answers 3

You need to have a layout in your activity to contain the fragment (preferably a FrameLayout).

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/container"
    android:name="com.gdevelopers.movies.movies.FragmentMoreMovies"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:layout_behavior="@string/appbar_scrolling_view_behavior" />

Then in activity put this code.

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    getSupportFragmentManager().beginTransaction()
        .replace(R.id.container, new YourFragment())
        .commit();
}

Answers 4

If you don't want to go with @CommonsWare's answer, then you'll have to provide a container in code. The function you need is:

setContentView(View)

Yep. If you check the Activity class code, you'll see that setContentView can be called with an int (layouts are identified by ints) or with a View. Therefore, you can create a ViewGroup instance on the fly, keep a reference to it (you would need to do the same with an XML-generated view), and add your fragments there. This is possible because XML layout files are just arguments that the view factory class, the Inflater, uses to find which View subclasses to instantiate, using a set of parameters provided in the XML. And obviously, you can do that by hand. Just pick whatever layout class you want to use, for example FrameLayout, and:

public class Activity extends AppCompatActivity {

    private FrameLayout root;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        root = new FrameLayout(this);
        root.setLayoutParams(new FrameLayout.LayoutParams(
                FrameLayout.LayoutParams.MATCH_PARENT, FrameLayout.LayoutParams.MATCH_PARENT));
        setContentView(root);
        // go on
    }
}

Answers 5

Simply put: when creating a fragment, we have to replace or add the fragment's view to a view present in our application. To replace or add a fragment's view, we normally add a FrameLayout or some other layout view (as a fragment container) in the activity or in a fragment.

Now, if you want to replace or add a fragment's view without adding an extra view container in your activity, you can simply access the views provided by AppCompatActivity or Activity.

So you can create a fragment without adding a view container in your activity, like this:

YourFragment fragment = new YourFragment();
transaction = getSupportFragmentManager().beginTransaction();
// here, android.R.id.content is the view that your fragment's view replaces
transaction.replace(android.R.id.content, fragment);
transaction.commit();

Answers 6

Use the Android-provided container instead of a custom container, like:

fragmentTransaction.replace(android.R.id.content, yourFragment);

Answers 7

1. Use getWindow().getDecorView() to get the DecorView (a FrameLayout).
2. Add a container view to the DecorView.
3. Add the Fragment to the container view.

Answers 8

LinearLayout llRoot = findViewById(R.id.parent);

FragmentManager fragMan = getSupportFragmentManager();
FragmentTransaction fragTransaction = fragMan.beginTransaction();
YourFragment yourFragment = (YourFragment) fragMan.findFragmentByTag("yourFragment");
if (yourFragment == null) {
    yourFragment = new YourFragment();
}
fragTransaction.replace(llRoot.getId(), yourFragment, "yourFragment");
fragTransaction.commit();

llRoot is a LinearLayout which contains the different view objects of your activity.

Answers 9

If you don't want to allocate a specific place in the view to the fragment container, you can always use a RelativeLayout. I guess without a container we can't place a fragment in a view.


How to debug a function which is getting called through validate_and_run() in R?


I want to debug functions in ShadowCAT package. https://github.com/Karel-Kroeze/ShadowCAT/tree/master/R

Take any internal function from this package; they are called via the validate_and_run() function. If I step through it, I am directly presented with the output and am not able to run through each line of the code I am interested in. I think validate_and_run() creates an environment in which to call the functions.

For example, I am trying to debug the shadowcat function from the package using the following code:

library(devtools)
install_github("Karel-Kroeze/ShadowCAT")
library(ShadowCAT)
debug(shadowcat)

alpha_beta <- simulate_testbank(model = "GPCM", number_items = 100,
                                number_dimensions = 3, number_itemsteps = 3)
model <- "GPCM"
start_items <- list(type = 'fixed', item_keys = c("item33", "item5", "item23"), n = 3)
stop_test <- list(min_n = 4, max_n = 30, target = c(.1, .1, .1))
estimator <- "maximum_aposteriori"
information_summary <- "posterior_determinant"
prior_form <- "normal"
prior_parameters <- list(mu = c(0, 0, 0), Sigma = diag(3))

# Initial call: get key of first item to administer
call1 <- shadowcat(answers = NULL, estimate = c(0, 0, 0), variance = as.vector(diag(3) * 25),
                   model = model, alpha = alpha_beta$alpha, beta = alpha_beta$beta,
                   start_items = start_items, stop_test = stop_test,
                   estimator = estimator, information_summary = information_summary,
                   prior_form = prior_form, prior_parameters = prior_parameters)

In the shadowcat() function above there are many internal functions defined, but I do not see them being called anywhere in shadowcat(). My speculation is that they are called inside the validate_and_run() function.

My question is: how can I debug those internal functions inside shadowcat(), see what each variable is storing, and see what the inputs of the internal functions are when they are called?

EDIT 1:

In any usual R function, when one debugs it, you can move the debugging cursor (the yellow highlighted line) line by line by clicking Next in RStudio. Also, once you have gone over a line of code, you can see the value of a variable by printing the variable name in the console. I am not able to do this in the shadowcat() function: the internal function definitions are written, but they are never visibly called. I need to see where they are called and step through them.

Any leads appreciated.

EDIT 2 Main body of the code:

function (answers, estimate, variance, model, alpha, beta, start_items,
    stop_test, estimator, information_summary, prior_form = NULL,
    prior_parameters = NULL, guessing = NULL, eta = NULL, constraints_and_characts = NULL,
    lower_bound = NULL, upper_bound = NULL, safe_eap = FALSE,
    eap_estimation_procedure = "riemannsum")
{
    result <- function() {
        switch_to_maximum_aposteriori <- estimator == "maximum_likelihood" &&
            !is.null(lower_bound) && !is.null(upper_bound)
        estimator <- get_estimator(switch_to_maximum_aposteriori = switch_to_maximum_aposteriori)
        prior_form <- get_prior_form(switch_to_maximum_aposteriori = switch_to_maximum_aposteriori)
        prior_parameters <- get_prior_parameters(switch_to_maximum_aposteriori = switch_to_maximum_aposteriori)
        beta <- get_beta()
        guessing <- get_guessing()
        number_items <- nrow(alpha)
        number_dimensions <- ncol(alpha)
        number_itemsteps_per_item <- number_non_missing_cells_per_row(beta)
        lp_constraints_and_characts <- get_lp_constraints_and_characts(number_items = number_items)
        item_keys <- rownames(alpha)
        item_keys_administered <- names(answers)
        item_keys_available <- get_item_keys_available(item_keys_administered = item_keys_administered,
            item_keys = item_keys)
        attr(estimate, "variance") <- matrix(variance, ncol = number_dimensions)
        estimate <- update_person_estimate(estimate = estimate,
            answers_vector = unlist(answers), item_indices_administered = match(item_keys_administered,
                item_keys), number_dimensions = number_dimensions,
            alpha = alpha, beta = beta, guessing = guessing,
            number_itemsteps_per_item = number_itemsteps_per_item,
            estimator = estimator, prior_form = prior_form, prior_parameters = prior_parameters)
        continue_test <- !terminate_test(number_answers = length(answers),
            estimate = estimate, min_n = stop_test$min_n, max_n = stop_test$max_n,
            variance_target = stop_test$target, cutoffs = stop_test$cutoffs)
        if (continue_test) {
            index_new_item <- get_next_item(start_items = start_items,
                information_summary = information_summary, lp_constraints = lp_constraints_and_characts$lp_constraints,
                lp_characters = lp_constraints_and_characts$lp_chars,
                estimate = estimate, model = model, answers = unlist(answers),
                prior_form = prior_form, prior_parameters = prior_parameters,
                available = match(item_keys_available, item_keys),
                administered = match(item_keys_administered, item_keys),
                number_items = number_items, number_dimensions = number_dimensions,
                estimator = estimator, alpha = alpha, beta = beta,
                guessing = guessing, number_itemsteps_per_item = number_itemsteps_per_item,
                stop_test = stop_test, eap_estimation_procedure = eap_estimation_procedure)
            key_new_item <- item_keys[index_new_item]
        }
        else {
            key_new_item <- NULL
        }
        list(key_new_item = as.scalar2(key_new_item), continue_test = as.scalar2(continue_test),
            estimate = as.vector(estimate), variance = as.vector(attr(estimate, "variance")),
            answers = answers)
    }
    update_person_estimate <- function(estimate, answers_vector,
        item_indices_administered, number_dimensions, alpha,
        beta, guessing, number_itemsteps_per_item, estimator,
        prior_form, prior_parameters) {
        if (length(answers) > start_items$n)
            estimate_latent_trait(estimate = estimate, answers = answers_vector,
                prior_form = prior_form, prior_parameters = prior_parameters,
                model = model, administered = item_indices_administered,
                number_dimensions = number_dimensions, estimator = estimator,
                alpha = alpha, beta = beta, guessing = guessing,
                number_itemsteps_per_item = number_itemsteps_per_item,
                safe_eap = safe_eap, eap_estimation_procedure = eap_estimation_procedure)
        else estimate
    }
    get_item_keys_available <- function(item_keys_administered, item_keys) {
        if (is.null(item_keys_administered))
            item_keys
        else item_keys[-which(item_keys %in% item_keys_administered)]
    }
    get_beta <- function() {
        if (model == "GPCM" && is.null(beta) && !is.null(eta))
            row_cumsum(eta)
        else beta
    }
    get_guessing <- function() {
        if (is.null(guessing))
            matrix(0, nrow = nrow(as.matrix(alpha)), ncol = 1,
                dimnames = list(rownames(alpha), NULL))
        else guessing
    }
    get_estimator <- function(switch_to_maximum_aposteriori) {
        if (switch_to_maximum_aposteriori)
            "maximum_aposteriori"
        else estimator
    }
    get_prior_form <- function(switch_to_maximum_aposteriori) {
        if (switch_to_maximum_aposteriori)
            "uniform"
        else prior_form
    }
    get_prior_parameters <- function(switch_to_maximum_aposteriori) {
        if (switch_to_maximum_aposteriori)
            list(lower_bound = lower_bound, upper_bound = upper_bound)
        else prior_parameters
    }
    get_lp_constraints_and_characts <- function(number_items) {
        if (is.null(constraints_and_characts))
            NULL
        else constraints_lp_format(max_n = stop_test$max_n, number_items = number_items,
            characteristics = constraints_and_characts$characteristics,
            constraints = constraints_and_characts$constraints)
    }
    validate <- function() {
        if (is.null(estimate))
            return(add_error("estimate", "is missing"))
        if (is.null(variance))
            return(add_error("variance", "is
missing"))         if (!is.vector(variance))              return(add_error("variance", "should be entered as vector"))         if (sqrt(length(variance)) != round(sqrt(length(variance))))              return(add_error("variance", "should be a covariance matrix turned into a vector"))         if (is.null(model))              return(add_error("model", "is missing"))         if (is.null(alpha))              return(add_error("alpha", "is missing"))         if (is.null(start_items))              return(add_error("start_items", "is missing"))         if (is.null(stop_test))              return(add_error("stop_test", "is missing"))         if (is.null(estimator))              return(add_error("estimator", "is missing"))         if (is.null(information_summary))              return(add_error("information_summary", "is missing"))         if (!is.matrix(alpha) || is.null(rownames(alpha)))              return(add_error("alpha", "should be a matrix with item keys as row names"))         if (!is.null(beta) && (!is.matrix(beta) || is.null(rownames(beta))))              return(add_error("beta", "should be a matrix with item keys as row names"))         if (!is.null(eta) && (!is.matrix(eta) || is.null(rownames(eta))))              return(add_error("eta", "should be a matrix with item keys as row names"))         if (!is.null(guessing) && (!is.matrix(guessing) || ncol(guessing) !=              1 || is.null(rownames(guessing))))              return(add_error("guessing", "should be a single column matrix with item keys as row names"))         if (!is.null(start_items$type) && start_items$type ==              "random_by_dimension" && length(start_items$n_by_dimension) %not_in%              c(1, length(estimate)))              return(add_error("start_items", "length of n_by_dimension should be a scalar or vector of the length of estimate"))         if (!row_names_are_equal(rownames(alpha), list(alpha,              beta, eta, guessing)))              add_error("alpha_beta_eta_guessing", 
"should have equal row names, in same order")         if (!is.null(beta) && !na_only_end_rows(beta))              add_error("beta", "can only contain NA at the end of rows, no values allowed after an NA in a row")         if (!is.null(eta) && !na_only_end_rows(eta))              add_error("eta", "can only contain NA at the end of rows, no values allowed after an NA in a row")         if (length(estimate) != ncol(alpha))              add_error("estimate", "length should be equal to the number of columns of the alpha matrix")         if (length(estimate)^2 != length(variance))              add_error("variance", "should have a length equal to the length of estimate squared")         if (is.null(answers) && !is.positive.definite(matrix(variance,              ncol = sqrt(length(variance)))))              add_error("variance", "matrix is not positive definite")         if (model %not_in% c("3PLM", "GPCM", "SM", "GRM"))              add_error("model", "of unknown type")         if (model != "GPCM" && is.null(beta))              add_error("beta", "is missing")         if (model == "GPCM" && is.null(beta) && is.null(eta))              add_error("beta_and_eta", "are both missing; define at least one of them")         if (model == "GPCM" && !is.null(beta) && !is.null(eta) &&              !all(row_cumsum(eta) == beta))              add_error("beta_and_eta", "objects do not match")         if (estimator != "maximum_likelihood" && is.null(prior_form))              add_error("prior_form", "is missing")         if (estimator != "maximum_likelihood" && is.null(prior_parameters))              add_error("prior_parameters", "is missing")         if (!is.null(prior_form) && prior_form %not_in% c("normal",              "uniform"))              add_error("prior_form", "of unknown type")         if (!is.null(prior_form) && !is.null(prior_parameters) &&              prior_form == "uniform" && (is.null(prior_parameters$lower_bound) ||              is.null(prior_parameters$upper_bound)))     
         add_error("prior_form_is_uniform", "so prior_parameters should contain lower_bound and upper_bound")         if (!is.null(prior_form) && !is.null(prior_parameters) &&              prior_form == "normal" && (is.null(prior_parameters$mu) ||              is.null(prior_parameters$Sigma)))              add_error("prior_form_is_normal", "so prior_parameters should contain mu and Sigma")         if (!is.null(prior_parameters$mu) && length(prior_parameters$mu) !=              length(estimate))              add_error("prior_parameters_mu", "should have same length as estimate")         if (!is.null(prior_parameters$Sigma) && (!is.matrix(prior_parameters$Sigma) ||              !all(dim(prior_parameters$Sigma) == c(length(estimate),                  length(estimate))) || !is.positive.definite(prior_parameters$Sigma)))              add_error("prior_parameters_sigma", "should be a square positive definite matrix, with dimensions equal to the length of estimate")         if (!is.null(prior_parameters$lower_bound) && !is.null(prior_parameters$upper_bound) &&              (length(prior_parameters$lower_bound) != length(estimate) ||                  length(prior_parameters$upper_bound) != length(estimate)))              add_error("prior_parameters_bounds", "should contain lower and upper bound of the same length as estimate")         if (is.null(stop_test$max_n))              add_error("stop_test", "contains no max_n")         if (!is.null(stop_test$max_n) && stop_test$max_n > nrow(alpha))              add_error("stop_test_max_n", "is larger than the number of items in the item bank")         if (!is.null(stop_test$max_n) && !is.null(stop_test$cutoffs) &&              (!is.matrix(stop_test$cutoffs) || nrow(stop_test$cutoffs) <                  stop_test$max_n || ncol(stop_test$cutoffs) !=                  length(estimate) || any(is.na(stop_test$cutoffs))))              add_error("stop_test_cutoffs", "should be a matrix without missing values, and number of rows equal to 
max_n and number of columns equal to the number of dimensions")         if (start_items$n == 0 && information_summary == "posterior_expected_kullback_leibler")              add_error("start_items", "requires n > 0 for posterior expected kullback leibler information summary")         if (!is.null(start_items$type) && start_items$type ==              "random_by_dimension" && length(start_items$n_by_dimension) ==              length(estimate) && start_items$n != sum(start_items$n_by_dimension))              add_error("start_items_n", "contains inconsistent information. Total length of start phase and sum of length per dimension do not match (n != sum(n_by_dimension)")         if (!is.null(start_items$type) && start_items$type ==              "random_by_dimension" && length(start_items$n_by_dimension) ==              1 && start_items$n != sum(rep(start_items$n_by_dimension,              length(estimate))))              add_error("start_items_n", "contains inconsistent information. Total length of start phase and sum of length per dimension do not match")         if (!is.null(stop_test$cutoffs) && !is.matrix(stop_test$cutoffs))              add_error("stop_test", "contains cutoff values in non-matrix format")         if (!all(names(answers) %in% rownames(alpha)))              add_error("answers", "contains non-existing key")         if (estimator %not_in% c("maximum_likelihood", "maximum_aposteriori",              "expected_aposteriori"))              add_error("estimator", "of unknown type")         if (information_summary %not_in% c("determinant", "posterior_determinant",              "trace", "posterior_trace", "posterior_expected_kullback_leibler"))              add_error("information_summary", "of unknown type")         if (estimator == "maximum_likelihood" && information_summary %in%              c("posterior_determinant", "posterior_trace", "posterior_expected_kullback_leibler"))              add_error("estimator_is_maximum_likelihood", "so using a posterior 
information summary makes no sense")         if (estimator != "maximum_likelihood" && (!is.null(lower_bound) ||              !is.null(upper_bound)))              add_error("bounds", "can only be defined if estimator is maximum likelihood")         if (!is.null(lower_bound) && length(lower_bound) %not_in%              c(1, length(estimate)))              add_error("lower_bound", "length of lower bound should be a scalar or vector of the length of estimate")         if (!is.null(upper_bound) && length(upper_bound) %not_in%              c(1, length(estimate)))              add_error("upper_bound", "length of upper bound should be a scalar or vector of the length of estimate")         if (!no_missing_information(constraints_and_characts$characteristics,              constraints_and_characts$constraints))              add_error("constraints_and_characts", "constraints and characteristics should either be defined both or not at all")         if (!characteristics_correct_format(constraints_and_characts$characteristics,              number_items = nrow(alpha)))              add_error("characteristics", "should be a data frame with number of rows equal to the number of items in the item bank")         if (!constraints_correct_structure(constraints_and_characts$constraints))              add_error("constraints_structure", "should be a list of length three lists, with elements named 'name', 'op', 'target'")         if (!constraints_correct_names(constraints_and_characts$constraints,              constraints_and_characts$characteristics))              add_error("constraints_name_elements", "should be defined as described in the details section of constraints_lp_format()")         if (!constraints_correct_operators(constraints_and_characts$constraints))              add_error("constraints_operator_elements", "should be defined as described in the details section of constraints_lp_format()")         if (!constraints_correct_targets(constraints_and_characts$constraints))          
    add_error("constraints_target_elements", "should be defined as described in the details section of constraints_lp_format()")     }     invalid_result <- function() {         list(errors = errors())     }     validate_and_run() } 

EDIT 3 validate_and_run() function:

function ()
{
    .errors <- list()
    add_error <- function(key, value = TRUE) {
        .errors[key] <<- value
    }
    errors <- function() {
        .errors
    }
    validate_and_runner <- function() {
        if (exists("validate", parent.frame(), inherits = FALSE))
            do.call("validate", list(), envir = parent.frame())
        if (exists("test_inner_functions", envir = parent.frame(n = 2), inherits = FALSE))
            get("result", parent.frame())
        else if (length(errors()) == 0)
            do.call("result", list(), envir = parent.frame())
        else do.call("invalid_result", list(), envir = parent.frame())
    }
    for (n in ls(environment())) assign(n, get(n, environment()), parent.frame())
    do.call("validate_and_runner", list(), envir = parent.frame())
}

0 Answers


Emiting websocket message from routes


I'm trying to set up my server with websockets so that when something is updated via one of my routes, I can also emit a websocket message announcing the update.

The idea is to save something to my Mongo DB when someone hits a route like /add-team-member, then emit a message to everyone who is connected via websocket and is part of the websocket room that corresponds to that team.

I've followed the documentation for socket.io to setup my app in the following way:

App.js

// there's a lot of code in here which sets what to use on my app but here's the important lines

const app = express();
const routes = require('./routes/index');

const sessionObj = {
    secret: process.env.SECRET,
    key: process.env.KEY,
    resave: false,
    saveUninitialized: false,
    store: new MongoStore({ mongooseConnection: mongoose.connection }),
    cookie: { _expires: Number(process.env.COOKIETIME) } // time in ms
};

app.use(session(sessionObj));
app.use(passport.initialize());
app.use(passport.session());

module.exports = { app, sessionObj };

start.js

const mongoose = require('mongoose');
const passportSocketIo = require("passport.socketio");
const cookieParser = require('cookie-parser');

// import environmental variables from our variables.env file
require('dotenv').config({ path: 'variables.env' });

// Connect to our database and handle any bad connections
mongoose.connect(process.env.DATABASE);

// import mongo db models
require('./models/user');
require('./models/team');

// Start our app!
const app = require('./app');
app.app.set('port', process.env.PORT || 7777);

const server = app.app.listen(app.app.get('port'), () => {
  console.log(`Express running → PORT ${server.address().port}`);
});

const io = require('socket.io')(server);

io.set('authorization', passportSocketIo.authorize({
  cookieParser: cookieParser,
  key:    app.sessionObj.key,    // the name of the cookie where express/connect stores its session_id
  secret: app.sessionObj.secret, // the session_secret used to parse the cookie
  store:  app.sessionObj.store,  // we NEED to use a session store, no memory store please
  success: onAuthorizeSuccess,   // *optional* callback on success
  fail:    onAuthorizeFail,      // *optional* callback on fail/error
}));

function onAuthorizeSuccess(data, accept) {}

function onAuthorizeFail(data, message, error, accept) {}

io.on('connection', function(client) {
  client.on('join', function(data) {
    client.emit('messages', "server socket response!!");
  });

  client.on('getmessage', function(data) {
    client.emit('messages', data);
  });
});

My problem is that a lot of Mongo DB save actions happen in my ./routes/index file, and I would like to be able to emit messages from my routes rather than from the end of start.js, where socket.io is connected.

Is there any way to emit a websocket message from my ./routes/index file even though io is set up further down the line in start.js?

for example something like this:

router.get('/add-team-member', (req, res) => {   // some io.emit action here }); 

Maybe I need to move where I'm initializing the socket.io stuff, but I haven't been able to find any documentation on this. Or perhaps I can already access socket.io from my routes somehow?

Thanks and appreciate the help, let me know if anything is unclear!

5 Answers

Answers 1

You can use emiter-adapter to emit data to clients in other processes/servers. It uses a Redis DB as the backend for emitting messages.

Answers 2

As mentioned above, io is in your global scope. If you do

router.get('/add-team-member', (req, res) => {
    io.sockets.emit('AddTeamMember');
});

Then every connected client that is listening for the AddTeamMember event will run its associated .on handler. This is probably the easiest solution, and unless you're expecting a huge wave of users without any plans for load balancing, it should be suitable for the time being.

Another alternative: the socket.io lib has rooms functionality, where you can join rooms and emit to them using the io object itself (https://socket.io/docs/rooms-and-namespaces/). It'd look something like this:

io.sockets.in('yourroom').broadcast('AddTeamMember'); 

This would essentially do the same thing as above, only instead of broadcasting to every client, it'd only broadcast to those in that room. You'd basically have to figure out a way to get the user's socket into the room before they make the get request, or in other words, make them exclusive. That way you can reduce the load your server has to push out whenever that route request is made.

Lastly, if neither of the above options works for you and you absolutely have to send to the one client who initiated the request, then it's going to get messy, because you need some sort of id for that person, and since you have no reference, you'd have to store all your sockets upon connection and then make a comparison. I don't fully recommend something like this, because I haven't tested it and don't know what repercussions it could have, but here is the gist of an idea:

app.set('trust proxy', true);

var SOCKETS = [];

io.on('connection', function(client) {
  SOCKETS.push(client);

  client.on('join', function(data) {
    client.emit('messages', "server socket response!!");
  });

  client.on('getmessage', function(data) {
    client.emit('messages', data);
  });
});

router.get('/add-team-member', (req, res) => {
  for (let i = 0; i < SOCKETS.length; i++) {
    if (SOCKETS[i].request.connection.remoteAddress == req.ip)
      SOCKETS[i].emit('AddTeamMember');
  }
});

Keep in mind, if you do go down this route, you're going to need to maintain that array when users disconnect, and if you're doing session management, that's going to get hairy really quickly.
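The bookkeeping itself is plain array manipulation and independent of socket.io. A minimal sketch (the socket objects here are stand-ins that only need an id, as real socket.io clients have):

```javascript
// Keep a registry of connected sockets and clean it up on disconnect.
const sockets = [];

function addSocket(socket) {
  sockets.push(socket);
}

function removeSocket(socket) {
  const i = sockets.findIndex(s => s.id === socket.id);
  if (i !== -1) sockets.splice(i, 1);
}

// Usage sketch: two connects, one disconnect.
addSocket({ id: 'a' });
addSocket({ id: 'b' });
removeSocket({ id: 'a' });
console.log(sockets.map(s => s.id)); // [ 'b' ]
```

In a real server you would call removeSocket from the client's disconnect handler.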

Good luck, let us know your results.

Answers 3

Yes, it is possible: you just have to attach the socket.io instance to each request your server receives. Looking at your start.js file, you just have to change your code to:

// Start our app!
const app = require('./app');
app.app.set('port', process.env.PORT || 7777);

const server = app.app.listen(app.app.get('port'), () => {
  console.log(`Express running → PORT ${server.address().port}`);
});

const io = require('socket.io')(server);

server.on('request', function(request, response) {
  request.io = io;
});

Now, when you receive a request on which you want to emit a message to the clients, you can use the io instance from the request object:

router.get('/add-team-member', (req, res) => {
    // as you are doing a broadcast you just need to emit the msg
    req.io.sockets.emit('addteammember', { member: 6 });
    ....
    res.status(200);
    res.end();
});

Doing that, I was also able to integrate with a test framework like Mocha and test the emitted events too.

I did some integrations like that, and in my experience the last thing the middleware did was emit the message to the clients on the socket.

As a good practice, the very beginning of my middleware functions did data validation, sanitization, and cleaning. Here is my working example:

var app = require('../app');
var server = require('http').Server(app);
var io = require('socket.io')(server);

io.on('connection', function(client) {
    client.emit('connected');
    client.on('disconnect', function() {
        console.log('disconnected', client.id);
    });
});

server.on('request', function(request, response) {
    request.io = io;
});

pg.initialize(app.config.DATABASEURL, function(err) {
    if (err) {
        throw err;
    }

    app.set('port', process.env.PORT || 3000);

    var server1 = server.listen(app.get('port'), function() {
        var host = 'localhost';
        var port = server1.address().port;

        console.log('Example app listening at http://%s:%s', host, port);
    });
});

Answers 4

I did something similar in the past, using namespaces.

Let's say your clients connect to your server using "Frontend" as the namespace. My solution was to create the socket.io instance as a class in a separate file:

websockets/index.js

const socket = require('socket.io');

class websockets {
  constructor(server) {
    this.io = socket(server);
    this.frontend = new Frontend(this.io);

    this.io.use((socket, next) => {
      // put here the logic to authorize your users..
      // even better in a separate file :-)
      next();
    });
  }
}

class Frontend {
  constructor(io) {
    this.nsp = io.of('/Frontend');

    [ ... ]
  }
}

module.exports = websockets;

Then in App.js

const app = require('express')();
const server = require('http').createServer(app);
const websockets = require('./websockets/index');
const WS = new websockets(server);

app.use('/', (req, res, next) => {
  req.websocket = WS;
  next();
}, require('./routes/index'));

[ ... ]

Finally, your routes can do:

routes/index.js

router.get('/add-team-member', (req, res) => {
  req.websocket.frontend.nsp.emit('whatever', { ... });

  [ ... ]
});

Answers 5

Your io is actually the server socket object; you can emit events from it to any specific user with:

io.to(userSocketId).emit('eventName', data); 

Or you can broadcast with:

io.emit('eventName', data); 

Just make sure you require socket.io before using it :)


Dynamically update PHP generated elements after Ajax login


I load content (comments) on the page via Ajax infinite scroll, and I use Ajax to log in/out as well. On successful login I want to update the reply or like buttons depending on what that user has liked or disliked.

The simple method is not to use Ajax for login/logout, or to refresh the page anyway, because I check what the user has liked/disliked with PHP, and if the page does not get refreshed those scripts do not fire again.

But if I refresh the page, all the loaded comments are gone and the user needs to scroll again. One solution I found is to use the load() method to refresh the divs with the scripts, but I'm not sure that is the way to go. So basically, how do I dynamically update PHP-generated elements on the page after an Ajax login?

Let me explain better:

Let's say I have a PHP script that makes this check:

<?php

$q = $db->query("SELECT who_liked FROM likes WHERE (com_liked = com_id AND who_liked = curent_user_id)"); // actual query uses prepared statements, this is for example

$count = $q->rowCount();

if ($count > 0) {
    echo "<style> #like_btn{background-color: green;} </style>";
}
?>

So if the user is not logged in, all the like/dislike buttons are gray. The login is done through Ajax: the user logs in with email/username and password, a login session is started, and the username and profile image are selected from the database based on the user id and displayed on the page/navbar. Now I need to check what that user has liked, and so on. Should I make this check in the Ajax response? Should I use load() to refresh the likes/dislikes of the comments along with the script that checks them? Should I put all the PHP scripts in the Ajax response so they fire on successful login? (Which I think is the way to go.)

Ex:

, success: function(response) {
    // php scripts with the sql query for all the checks
}
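One data-driven way to do this: have the login endpoint return the list of liked comment ids as JSON, and patch the buttons in the success callback instead of re-running PHP. The shapes below (comId, color) are hypothetical, just to show the patching logic:

```javascript
// Given the ids the server says the user liked, recolor the buttons.
function applyLikes(likedIds, buttons) {
  return buttons.map(btn => ({
    comId: btn.comId,
    color: likedIds.includes(btn.comId) ? 'green' : 'grey',
  }));
}

// Usage sketch: the server said comment 2 is liked.
const buttons = [{ comId: 1, color: 'grey' }, { comId: 2, color: 'grey' }];
console.log(applyLikes([2], buttons).map(b => b.color)); // [ 'grey', 'green' ]
```

On a real page the same loop would toggle CSS classes on the actual button elements instead of returning new objects.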

1 Answers

Answers 1

Try the code below.

Send a JSON-encoded array in the login response:

<?php

$q = $db->query("SELECT who_liked FROM likes WHERE (com_liked = com_id AND who_liked = curent_user_id)"); // actual query uses prepared statements, this is for example

$count = $q->rowCount();

if ($count > 0) {
    $response = ['success' => true];
} else {
    $response = ['success' => false];
}

echo json_encode($response);
?>

Use the CSS below:

<style type="text/css">
    .like_btn {
        background-color: green;
    }

    .like_btn.liked {
        background-color: orange;
    }

    .like_btn.not-liked {
        background-color: grey;
    }
</style>

Add the not-liked CSS class to buttons that aren't liked, as below, when the PHP page loads:

<button class="like_btn not-liked">Like Button</button> 

Then you will be able to identify the liked/disliked and not-liked buttons.

Use the liked CSS class for liked buttons, so at any time you will be able to toggle the buttons using JavaScript:

<button class="like_btn liked">Like Button</button> 

Finally the JQuery AJAX script,

<script type="text/javascript">
    $.ajax({
        url: '/path/to/file',
        type: 'POST', // default is GET
        dataType: 'json',
        data: {param1: 'value1'},
        success: function(res) {
            // On login success
            if (res.success) {
                $('.like_btn').toggleClass('not-liked');
            }
        }
    });
</script>

Feel free to leave comments, if you need any further clarifications.


How to change json encoding behaviour for serializable python object?


It is easy to change the format of an object which is not JSON serializable, e.g. datetime.datetime.
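For instance, the standard default hook handles the datetime case in a couple of lines (a generic sketch, separate from the code below):

```python
import datetime
import json

def handler(obj):
    # json.dumps only calls this hook for objects it cannot serialize itself
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()
    raise TypeError("Unserializable: %r" % type(obj))

print(json.dumps({"when": datetime.datetime(2013, 5, 7)}, default=handler))
# → {"when": "2013-05-07T00:00:00"}
```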

My requirement, for debugging purposes, is to alter the way some custom objects extended from base ones like dict and list get serialized to JSON. Code:

import datetime
import json

def json_debug_handler(obj):
    print("object received:")
    print(type(obj))
    print("\n\n")
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()
    elif isinstance(obj, mDict):
        return {'orig': obj, 'attrs': vars(obj)}
    elif isinstance(obj, mList):
        return {'orig': obj, 'attrs': vars(obj)}
    else:
        return None

class mDict(dict):
    pass

class mList(list):
    pass

def test_debug_json():
    games = mList(['mario', 'contra', 'tetris'])
    games.src = 'console'
    scores = mDict({'dp': 10, 'pk': 45})
    scores.processed = "unprocessed"
    test_json = {'games': games, 'scores': scores, 'date': datetime.datetime.now()}
    print(json.dumps(test_json, default=json_debug_handler))

if __name__ == '__main__':
    test_debug_json()

DEMO : http://ideone.com/hQJnLy

Output:

{"date": "2013-05-07T01:03:13.098727", "games": ["mario", "contra", "tetris"], "scores": {"pk": 45, "dp": 10}}

Desired output:

{"date": "2013-05-07T01:03:13.098727", "games": {"orig": ["mario", "contra", "tetris"], "attrs": {"src": "console"}}, "scores": {"orig": {"pk": 45, "dp": 10}, "attrs": {"processed": "unprocessed"}}}

Does the default handler not work for serializable objects? If not, how can I override this without adding toJSON methods to the extended classes?

Also, there is this version of the JSON encoder, which does not work:

class JsonDebugEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()
        elif isinstance(obj, mDict):
            return {'orig': obj, 'attrs': vars(obj)}
        elif isinstance(obj, mList):
            return {'orig': obj, 'attrs': vars(obj)}
        else:
            return json.JSONEncoder.default(self, obj)

If there is a hack with pickle, __getstate__, __setstate__, and then using json.dumps over the pickle.loads object, I am open to that as well; I tried, but it did not work.
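The behavior in question can be demonstrated directly: json.dumps only falls back to default for objects it cannot serialize natively, and dict/list subclasses are serialized natively, so the hook never fires for them. A small probe (stdlib only):

```python
import json

class mDict(dict):
    pass

calls = []

def spy(obj):
    # record every object json.dumps hands to the default hook
    calls.append(type(obj).__name__)
    return None

json.dumps({"scores": mDict({"dp": 10})}, default=spy)
print(calls)  # → [] : the dict subclass never reaches the default hook
```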

12 Answers

Answers 1

It seems that to achieve the behavior you want, with the given restrictions, you'll have to delve into the JSONEncoder class a little. Below I've written a custom JSONEncoder that overrides the iterencode method to pass a custom isinstance method to _make_iterencode. It isn't the cleanest thing in the world, but it seems to be the best of the available options and keeps customization to a minimum.

```python
# customencoder.py
from json.encoder import (_make_iterencode, JSONEncoder,
                          encode_basestring_ascii, FLOAT_REPR, INFINITY,
                          c_make_encoder, encode_basestring)


class CustomObjectEncoder(JSONEncoder):

    def iterencode(self, o, _one_shot=False):
        """
        Most of the original method has been left untouched.

        _one_shot is forced to False to prevent c_make_encoder from
        being used. c_make_encoder is a function defined in C, so it's easier
        to avoid using it than overriding/redefining it.

        The keyword argument isinstance for _make_iterencode has been set
        to self.isinstance. This allows for a custom isinstance function
        to be defined, which can be used to defer the serialization of custom
        objects to the default method.
        """
        # Force the use of _make_iterencode instead of c_make_encoder
        _one_shot = False

        if self.check_circular:
            markers = {}
        else:
            markers = None
        if self.ensure_ascii:
            _encoder = encode_basestring_ascii
        else:
            _encoder = encode_basestring
        if self.encoding != 'utf-8':
            def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding):
                if isinstance(o, str):
                    o = o.decode(_encoding)
                return _orig_encoder(o)

        def floatstr(o, allow_nan=self.allow_nan,
                     _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY):
            if o != o:
                text = 'NaN'
            elif o == _inf:
                text = 'Infinity'
            elif o == _neginf:
                text = '-Infinity'
            else:
                return _repr(o)

            if not allow_nan:
                raise ValueError(
                    "Out of range float values are not JSON compliant: " +
                    repr(o))

            return text

        # Instead of forcing _one_shot to False, you can also just
        # remove the first part of this conditional statement and only
        # call _make_iterencode
        if (_one_shot and c_make_encoder is not None
                and self.indent is None and not self.sort_keys):
            _iterencode = c_make_encoder(
                markers, self.default, _encoder, self.indent,
                self.key_separator, self.item_separator, self.sort_keys,
                self.skipkeys, self.allow_nan)
        else:
            _iterencode = _make_iterencode(
                markers, self.default, _encoder, self.indent, floatstr,
                self.key_separator, self.item_separator, self.sort_keys,
                self.skipkeys, _one_shot, isinstance=self.isinstance)
        return _iterencode(o, 0)
```

You can now subclass the CustomObjectEncoder so it correctly serializes your custom objects. The CustomObjectEncoder can also do cool stuff like handle nested objects.

```python
# test.py
import json
import datetime
from customencoder import CustomObjectEncoder


class MyEncoder(CustomObjectEncoder):

    def isinstance(self, obj, cls):
        if isinstance(obj, (mList, mDict)):
            return False
        return isinstance(obj, cls)

    def default(self, obj):
        """
        Defines custom serialization.

        To avoid circular references, any object that will always fail
        self.isinstance must be converted to something that is
        deserializable here.
        """
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()
        elif isinstance(obj, mDict):
            return {"orig": dict(obj), "attrs": vars(obj)}
        elif isinstance(obj, mList):
            return {"orig": list(obj), "attrs": vars(obj)}
        else:
            return None


class mList(list):
    pass


class mDict(dict):
    pass


def main():
    zelda = mList(['zelda'])
    zelda.src = "oldschool"
    games = mList(['mario', 'contra', 'tetris', zelda])
    games.src = 'console'
    scores = mDict({'dp': 10, 'pk': 45})
    scores.processed = "unprocessed"
    test_json = {'games': games, 'scores': scores,
                 'date': datetime.datetime.now()}
    print(json.dumps(test_json, cls=MyEncoder))


if __name__ == '__main__':
    main()
```

Answers 2

The answer by FastTurtle might be a much cleaner solution.

Here's something close to what you want based on the technique as explained in my question/answer: Overriding nested JSON encoding of inherited default supported objects like dict, list

```python
import json
import datetime


class mDict(dict):
    pass


class mList(list):
    pass


class JsonDebugEncoder(json.JSONEncoder):
    def _iterencode(self, o, markers=None):
        if isinstance(o, mDict):
            yield '{"__mDict__": '
            # Encode dictionary
            yield '{"orig": '
            for chunk in super(JsonDebugEncoder, self)._iterencode(o, markers):
                yield chunk
            yield ', '
            # / End of Encode dictionary
            # Encode attributes
            yield '"attr": '
            for key, value in o.__dict__.iteritems():
                yield '{"' + key + '": '
                for chunk in super(JsonDebugEncoder, self)._iterencode(value, markers):
                    yield chunk
                yield '}'
            yield '}'
            # / End of Encode attributes
            yield '}'
        elif isinstance(o, mList):
            yield '{"__mList__": '
            # Encode list
            yield '{"orig": '
            for chunk in super(JsonDebugEncoder, self)._iterencode(o, markers):
                yield chunk
            yield ', '
            # / End of Encode list
            # Encode attributes
            yield '"attr": '
            for key, value in o.__dict__.iteritems():
                yield '{"' + key + '": '
                for chunk in super(JsonDebugEncoder, self)._iterencode(value, markers):
                    yield chunk
                yield '}'
            yield '}'
            # / End of Encode attributes
            yield '}'
        else:
            for chunk in super(JsonDebugEncoder, self)._iterencode(o, markers=markers):
                yield chunk

    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()


class JsonDebugDecoder(json.JSONDecoder):
    def decode(self, s):
        obj = super(JsonDebugDecoder, self).decode(s)
        obj = self.recursiveObjectDecode(obj)
        return obj

    def recursiveObjectDecode(self, obj):
        if isinstance(obj, dict):
            decoders = [("__mList__", self.mListDecode),
                        ("__mDict__", self.mDictDecode)]
            for placeholder, decoder in decoders:
                if placeholder in obj:  # We assume it's supposed to be converted
                    return decoder(obj[placeholder])
                else:
                    for k in obj:
                        obj[k] = self.recursiveObjectDecode(obj[k])
        elif isinstance(obj, list):
            for x in range(len(obj)):
                obj[x] = self.recursiveObjectDecode(obj[x])
        return obj

    def mDictDecode(self, o):
        res = mDict()
        for key, value in o['orig'].iteritems():
            res[key] = self.recursiveObjectDecode(value)
        for key, value in o['attr'].iteritems():
            res.__dict__[key] = self.recursiveObjectDecode(value)
        return res

    def mListDecode(self, o):
        res = mList()
        for value in o['orig']:
            res.append(self.recursiveObjectDecode(value))
        for key, value in o['attr'].iteritems():
            res.__dict__[key] = self.recursiveObjectDecode(value)
        return res


def test_debug_json():
    games = mList(['mario', 'contra', 'tetris'])
    games.src = 'console'
    scores = mDict({'dp': 10, 'pk': 45})
    scores.processed = "unprocessed"
    test_json = {'games': games, 'scores': scores, 'date': datetime.datetime.now()}
    jsonDump = json.dumps(test_json, cls=JsonDebugEncoder)
    print jsonDump
    test_pyObject = json.loads(jsonDump, cls=JsonDebugDecoder)
    print test_pyObject


if __name__ == '__main__':
    test_debug_json()
```

This results in:

{"date": "2013-05-06T22:28:08.967000", "games": {"__mList__": {"orig": ["mario", "contra", "tetris"], "attr": {"src": "console"}}}, "scores": {"__mDict__": {"orig": {"pk": 45, "dp": 10}, "attr": {"processed": "unprocessed"}}}} 

This way you can encode it and decode it back to the python object it came from.

EDIT:

Here's a version that actually encodes to the output you wanted and can decode it as well. Whenever a dictionary contains both 'orig' and 'attr', it checks whether 'orig' holds a dictionary or a list; if so, it converts the object back to an mDict or mList respectively.

```python
import json
import datetime


class mDict(dict):
    pass


class mList(list):
    pass


class JsonDebugEncoder(json.JSONEncoder):
    def _iterencode(self, o, markers=None):
        if isinstance(o, mDict):    # Encode mDict
            yield '{"orig": '
            for chunk in super(JsonDebugEncoder, self)._iterencode(o, markers):
                yield chunk
            yield ', '
            yield '"attr": '
            for key, value in o.__dict__.iteritems():
                yield '{"' + key + '": '
                for chunk in super(JsonDebugEncoder, self)._iterencode(value, markers):
                    yield chunk
                yield '}'
            yield '}'
            # / End of Encode attributes
        elif isinstance(o, mList):    # Encode mList
            yield '{"orig": '
            for chunk in super(JsonDebugEncoder, self)._iterencode(o, markers):
                yield chunk
            yield ', '
            yield '"attr": '
            for key, value in o.__dict__.iteritems():
                yield '{"' + key + '": '
                for chunk in super(JsonDebugEncoder, self)._iterencode(value, markers):
                    yield chunk
                yield '}'
            yield '}'
        else:
            for chunk in super(JsonDebugEncoder, self)._iterencode(o, markers=markers):
                yield chunk

    def default(self, obj):
        if isinstance(obj, datetime.datetime):    # Encode datetime
            return obj.isoformat()


class JsonDebugDecoder(json.JSONDecoder):
    def decode(self, s):
        obj = super(JsonDebugDecoder, self).decode(s)
        obj = self.recursiveObjectDecode(obj)
        return obj

    def recursiveObjectDecode(self, obj):
        if isinstance(obj, dict):
            if "orig" in obj and "attr" in obj and isinstance(obj["orig"], list):
                return self.mListDecode(obj)
            elif "orig" in obj and "attr" in obj and isinstance(obj['orig'], dict):
                return self.mDictDecode(obj)
            else:
                for k in obj:
                    obj[k] = self.recursiveObjectDecode(obj[k])
        elif isinstance(obj, list):
            for x in range(len(obj)):
                obj[x] = self.recursiveObjectDecode(obj[x])
        return obj

    def mDictDecode(self, o):
        res = mDict()
        for key, value in o['orig'].iteritems():
            res[key] = self.recursiveObjectDecode(value)
        for key, value in o['attr'].iteritems():
            res.__dict__[key] = self.recursiveObjectDecode(value)
        return res

    def mListDecode(self, o):
        res = mList()
        for value in o['orig']:
            res.append(self.recursiveObjectDecode(value))
        for key, value in o['attr'].iteritems():
            res.__dict__[key] = self.recursiveObjectDecode(value)
        return res


def test_debug_json():
    games = mList(['mario', 'contra', 'tetris'])
    games.src = 'console'
    scores = mDict({'dp': 10, 'pk': 45})
    scores.processed = "unprocessed"
    test_json = {'games': games, 'scores': scores, 'date': datetime.datetime.now()}
    jsonDump = json.dumps(test_json, cls=JsonDebugEncoder)
    print jsonDump
    test_pyObject = json.loads(jsonDump, cls=JsonDebugDecoder)
    print test_pyObject
    print test_pyObject['games'].src


if __name__ == '__main__':
    test_debug_json()
```

Here's some more info about the output:

```
# Encoded
{"date": "2013-05-06T22:41:35.498000", "games": {"orig": ["mario", "contra", "tetris"], "attr": {"src": "console"}}, "scores": {"orig": {"pk": 45, "dp": 10}, "attr": {"processed": "unprocessed"}}}

# Decoded ('games' contains the mList with the src attribute and 'scores' contains the mDict processed attribute)
# Note that printing the python objects doesn't directly show the processed and src attributes, as seen below.
{u'date': u'2013-05-06T22:41:35.498000', u'games': [u'mario', u'contra', u'tetris'], u'scores': {u'pk': 45, u'dp': 10}}
```

Sorry for any bad naming conventions, it's a quick setup. ;)

Note: The datetime doesn't get decoded back to the python representation. Implementing that could be done by checking for any dict key that is called 'date' and contains a valid string representation of a datetime.
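As a rough Python 3 sketch of that idea (the helper names `restore_datetimes` and `maybe_datetime` are invented for illustration, and only keys literally named 'date' are considered, as the note suggests):

```python
import datetime

def maybe_datetime(key, value):
    # Hypothetical helper: re-parse values under a 'date' key that match
    # the ISO format produced by isoformat(); leave everything else alone.
    if key == 'date' and isinstance(value, str):
        try:
            return datetime.datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%f')
        except ValueError:
            return value
    return value

def restore_datetimes(obj):
    # Walk the decoded structure, converting matching strings as we go.
    if isinstance(obj, dict):
        return {k: restore_datetimes(maybe_datetime(k, v)) for k, v in obj.items()}
    if isinstance(obj, list):
        return [restore_datetimes(v) for v in obj]
    return obj
```

You would run this on the result of `json.loads` as a post-processing step.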

Answers 3

As others have pointed out already, the default handler only gets called for values that aren't one of the recognised types. My suggested solution to this problem is to preprocess the object you want to serialize, recursing over lists, tuples and dictionaries, but wrapping every other value in a custom class.

Something like this:

```python
def debug(obj):
    class Debug:
        def __init__(self, obj):
            self.originalObject = obj
    if obj.__class__ == list:
        return [debug(item) for item in obj]
    elif obj.__class__ == tuple:
        # tuple(...) is needed here; a bare generator expression
        # would not be recognized by the serializer as a tuple.
        return tuple(debug(item) for item in obj)
    elif obj.__class__ == dict:
        return dict((key, debug(obj[key])) for key in obj)
    else:
        return Debug(obj)
```

You would call this function, before passing your object to json.dumps, like this:

```python
test_json = debug(test_json)
print(json.dumps(test_json, default=json_debug_handler))
```

Note that this code is checking for objects whose class exactly matches a list, tuple or dictionary, so any custom objects that are extended from those types will be wrapped rather than parsed. As a result, the regular lists, tuples, and dictionaries will be serialized as usual, but all other values will be passed on to the default handler.
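A quick check of that exact-class matching (restating `debug()` here, with a `tuple(...)` call around the generator expression so the tuple branch actually yields a tuple, purely so the snippet runs standalone):

```python
def debug(obj):
    class Debug:
        def __init__(self, obj):
            self.originalObject = obj
    if obj.__class__ == list:
        return [debug(item) for item in obj]
    elif obj.__class__ == tuple:
        return tuple(debug(item) for item in obj)
    elif obj.__class__ == dict:
        return dict((key, debug(obj[key])) for key in obj)
    else:
        return Debug(obj)

class mList(list):
    pass

# A plain list is recursed into; an mList (a list subclass) is wrapped.
wrapped = debug({'plain': [1, 2], 'custom': mList([1, 2])})
```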

The end result of all this is that every value that reaches the default handler is guaranteed to be wrapped in one of these Debug classes. So the first thing you are going to want to do is extract the original object, like this:

obj = obj.originalObject 

You can then check the original object's type and handle whichever types need special processing. For everything else, you should just return the original object (so the last return from the handler should be return obj not return None).

```python
def json_debug_handler(obj):
    obj = obj.originalObject      # Add this line
    print("object received:")
    print type(obj)
    print("\n\n")
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()
    elif isinstance(obj, mDict):
        return {'orig': obj, 'attrs': vars(obj)}
    elif isinstance(obj, mList):
        return {'orig': obj, 'attrs': vars(obj)}
    else:
        return obj                # Change this line
```

Note that this code doesn't check for values that aren't serializable. These will fall through the final return obj, then will be rejected by the serializer and passed back to the default handler again - only this time without the Debug wrapper.

If you need to deal with that scenario, you could add a check at the top of the handler like this:

```python
if not hasattr(obj, 'originalObject'):
    return None
```

Ideone demo: http://ideone.com/tOloNq

Answers 4

The default function is only called when the node being dumped isn't natively serializable, and your mDict classes serialize as-is. Here's a little demo that shows when default is called and when not:

```python
import json

def serializer(obj):
    print 'serializer called'
    return str(obj)

class mDict(dict):
    pass

class mSet(set):
    pass

d = mDict(dict(a=1))
print json.dumps(d, default=serializer)

s = mSet({1, 2, 3,})
print json.dumps(s, default=serializer)
```

And the output:

```
{"a": 1}
serializer called
"mSet([1, 2, 3])"
```

Note that sets are not natively serializable, but dicts are.

Since your m___ classes are serializable, your handler is never called.

Update #1 -----

You could change JSON encoder code. The details of how to do this depend on which JSON implementation you're using. For example in simplejson, the relevant code is this, in encode.py:

```python
def _iterencode(o, _current_indent_level):
    ...
        for_json = _for_json and getattr(o, 'for_json', None)
        if for_json and callable(for_json):
            ...
        elif isinstance(o, list):
            ...
        else:
            _asdict = _namedtuple_as_object and getattr(o, '_asdict', None)
            if _asdict and callable(_asdict):
                for chunk in _iterencode_dict(_asdict(),
                        _current_indent_level):
                    yield chunk
            elif (_tuple_as_array and isinstance(o, tuple)):
                ...
            elif isinstance(o, dict):
                ...
            elif _use_decimal and isinstance(o, Decimal):
                ...
            else:
                ...
                o = _default(o)
                for chunk in _iterencode(o, _current_indent_level):
                    yield chunk
                ...
```

In other words, there is a hard-wired behavior that calls default only when the node being encoded isn't one of the recognized base types. You could override this in one of several ways:

1 -- subclass JSONEncoder as you've done above, but add a parameter to its initializer that specifies the function to be used in place of the standard _make_iterencode, in which you add a test that would call default for classes that meet your criteria. This is a clean approach since you aren't changing the JSON module, but you would be reiterating a lot of code from the original _make_iterencode. (Other variations on this approach include monkeypatching _make_iterencode or its sub-function _iterencode_dict).

2 -- alter the JSON module source, and use the __debug__ constant to change behavior:

```python
def _iterencode(o, _current_indent_level):
    ...
        for_json = _for_json and getattr(o, 'for_json', None)
        if for_json and callable(for_json):
            ...
        elif isinstance(o, list):
            ...
        ## added code below
        elif __debug__:
            o = _default(o)
            for chunk in _iterencode(o, _current_indent_level):
                yield chunk
        ## added code above
        else:
            ...
```

Ideally the JSONEncoder class would provide a parameter to specify "use default for all types", but it doesn't. The above is a simple one-time change that does what you're looking for.

Answers 5

Why can't you just create a new object type to pass to the encoder? Try:

```python
class MStuff(object):
    def __init__(self, content):
        self.content = content

class mDict(MStuff):
    pass

class mList(MStuff):
    pass

def json_debug_handler(obj):
    print("object received:")
    print(type(obj))
    print("\n\n")
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()
    elif isinstance(obj, MStuff):
        attrs = {}
        for key in obj.__dict__:
            if not (key.startswith("_") or key == "content"):
                attrs[key] = obj.__dict__[key]
        return {'orig': obj.content, 'attrs': attrs}
    else:
        return None
```

You could add validation on the mDict and mList if desired.
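For instance, a hedged sketch of such validation (Python 3; the exact checks are an assumption, not part of the answer): the wrapper constructors could require that the wrapped content really is a dict or list.

```python
class MStuff(object):
    def __init__(self, content):
        self.content = content

class mDict(MStuff):
    def __init__(self, content):
        # Assumed check: only accept an actual dict as wrapped content.
        if not isinstance(content, dict):
            raise TypeError("mDict expects a dict")
        super().__init__(content)

class mList(MStuff):
    def __init__(self, content):
        # Assumed check: only accept an actual list as wrapped content.
        if not isinstance(content, list):
            raise TypeError("mList expects a list")
        super().__init__(content)
```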

Answers 6

If you define these to override __instancecheck__:

```python
def strict_check(builtin):
    '''creates a new class from the builtin whose instance check
    method can be overridden to renounce particular types'''
    class BuiltIn(type):
        def __instancecheck__(self, other):
            print 'instance', self, type(other), other
            if type(other) in strict_check.blacklist:
                return False
            return builtin.__instancecheck__(other)
    # construct a class, whose instance check method is known.
    return BuiltIn('strict_%s' % builtin.__name__, (builtin,), dict())

# for safety, define it here.
strict_check.blacklist = ()
```

then patch json.encoder like this to override _make_iterencode.func_defaults:

```python
# modify json encoder to use some new list/dict attr.
import json.encoder
# save old stuff, never know when you need it.
old_defaults = json.encoder._make_iterencode.func_defaults
old_encoder = json.encoder.c_make_encoder
encoder_defaults = list(json.encoder._make_iterencode.func_defaults)
for index, default in enumerate(encoder_defaults):
    if default in (list, dict):
        encoder_defaults[index] = strict_check(default)

# change the defaults for _make_iterencode.
json.encoder._make_iterencode.func_defaults = tuple(encoder_defaults)
# disable C extension.
json.encoder.c_make_encoder = None
```

... your example would almost work verbatim:

```python
import datetime
import json

def json_debug_handler(obj):
    print("object received:")
    print type(obj)
    print("\n\n")
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()
    elif isinstance(obj, mDict):
        # degrade obj to more primitive dict()
        # to avoid cycles in the encoding.
        return {'orig': dict(obj), 'attrs': vars(obj)}
    elif isinstance(obj, mList):
        # degrade obj to more primitive list()
        # to avoid cycles in the encoding.
        return {'orig': list(obj), 'attrs': vars(obj)}
    else:
        return None


class mDict(dict):
    pass


class mList(list):
    pass

# set the stuff we want to process differently.
strict_check.blacklist = (mDict, mList)

def test_debug_json():
    global test_json
    games = mList(['mario', 'contra', 'tetris'])
    games.src = 'console'
    scores = mDict({'dp': 10, 'pk': 45})
    scores.processed = "unprocessed"
    test_json = {'games': games, 'scores': scores, 'date': datetime.datetime.now()}
    print(json.dumps(test_json, default=json_debug_handler))

if __name__ == '__main__':
    test_debug_json()
```

The things I needed to change were to make sure there were no cycles:

```python
    elif isinstance(obj, mDict):
        # degrade obj to more primitive dict()
        # to avoid cycles in the encoding.
        return {'orig': dict(obj), 'attrs': vars(obj)}
    elif isinstance(obj, mList):
        # degrade obj to more primitive list()
        # to avoid cycles in the encoding.
        return {'orig': list(obj), 'attrs': vars(obj)}
```

and add this somewhere before test_debug_json:

```python
# set the stuff we want to process differently.
strict_check.blacklist = (mDict, mList)
```

here is my console output:

```
>>> test_debug_json()
instance <class '__main__.strict_list'> <type 'dict'> {'date': datetime.datetime(2013, 7, 17, 12, 4, 40, 950637), 'games': ['mario', 'contra', 'tetris'], 'scores': {'pk': 45, 'dp': 10}}
instance <class '__main__.strict_dict'> <type 'dict'> {'date': datetime.datetime(2013, 7, 17, 12, 4, 40, 950637), 'games': ['mario', 'contra', 'tetris'], 'scores': {'pk': 45, 'dp': 10}}
instance <class '__main__.strict_list'> <type 'datetime.datetime'> 2013-07-17 12:04:40.950637
instance <class '__main__.strict_dict'> <type 'datetime.datetime'> 2013-07-17 12:04:40.950637
instance <class '__main__.strict_list'> <type 'datetime.datetime'> 2013-07-17 12:04:40.950637
instance <class '__main__.strict_dict'> <type 'datetime.datetime'> 2013-07-17 12:04:40.950637
object received:
<type 'datetime.datetime'>

instance <class '__main__.strict_list'> <class '__main__.mList'> ['mario', 'contra', 'tetris']
instance <class '__main__.strict_dict'> <class '__main__.mList'> ['mario', 'contra', 'tetris']
instance <class '__main__.strict_list'> <class '__main__.mList'> ['mario', 'contra', 'tetris']
instance <class '__main__.strict_dict'> <class '__main__.mList'> ['mario', 'contra', 'tetris']
object received:
<class '__main__.mList'>

instance <class '__main__.strict_list'> <type 'dict'> {'attrs': {'src': 'console'}, 'orig': ['mario', 'contra', 'tetris']}
instance <class '__main__.strict_dict'> <type 'dict'> {'attrs': {'src': 'console'}, 'orig': ['mario', 'contra', 'tetris']}
instance <class '__main__.strict_list'> <type 'dict'> {'src': 'console'}
instance <class '__main__.strict_dict'> <type 'dict'> {'src': 'console'}
instance <class '__main__.strict_list'> <type 'list'> ['mario', 'contra', 'tetris']
instance <class '__main__.strict_list'> <class '__main__.mDict'> {'pk': 45, 'dp': 10}
instance <class '__main__.strict_dict'> <class '__main__.mDict'> {'pk': 45, 'dp': 10}
instance <class '__main__.strict_list'> <class '__main__.mDict'> {'pk': 45, 'dp': 10}
instance <class '__main__.strict_dict'> <class '__main__.mDict'> {'pk': 45, 'dp': 10}
object received:
<class '__main__.mDict'>

instance <class '__main__.strict_list'> <type 'dict'> {'attrs': {'processed': 'unprocessed'}, 'orig': {'pk': 45, 'dp': 10}}
instance <class '__main__.strict_dict'> <type 'dict'> {'attrs': {'processed': 'unprocessed'}, 'orig': {'pk': 45, 'dp': 10}}
instance <class '__main__.strict_list'> <type 'dict'> {'processed': 'unprocessed'}
instance <class '__main__.strict_dict'> <type 'dict'> {'processed': 'unprocessed'}
instance <class '__main__.strict_list'> <type 'dict'> {'pk': 45, 'dp': 10}
instance <class '__main__.strict_dict'> <type 'dict'> {'pk': 45, 'dp': 10}
{"date": "2013-07-17T12:04:40.950637", "games": {"attrs": {"src": "console"}, "orig": ["mario", "contra", "tetris"]}, "scores": {"attrs": {"processed": "unprocessed"}, "orig": {"pk": 45, "dp": 10}}}
```

Answers 7

Try the below. It produces the output you want and looks relatively simple. The only real difference from your encoder class is that we should override both decode and encode methods (since the latter is still called for types the encoder knows how to handle).

```python
import json
import datetime

class JSONDebugEncoder(json.JSONEncoder):
    # transform objects known to JSONEncoder here
    def encode(self, o, *args, **kw):
        for_json = o
        if isinstance(o, mDict):
            for_json = {'orig': o, 'attrs': vars(o)}
        elif isinstance(o, mList):
            for_json = {'orig': o, 'attrs': vars(o)}
        return super(JSONDebugEncoder, self).encode(for_json, *args, **kw)

    # handle objects not known to JSONEncoder here
    def default(self, o, *args, **kw):
        if isinstance(o, datetime.datetime):
            return o.isoformat()
        else:
            return super(JSONDebugEncoder, self).default(o, *args, **kw)


class mDict(dict):
    pass

class mList(list):
    pass

def test_debug_json():
    games = mList(['mario', 'contra', 'tetris'])
    games.src = 'console'
    scores = mDict({'dp': 10, 'pk': 45})
    scores.processed = "unprocessed"
    test_json = {'games': games, 'scores': scores, 'date': datetime.datetime.now()}
    print(json.dumps(test_json, cls=JSONDebugEncoder))

if __name__ == '__main__':
    test_debug_json()
```

Answers 8

If you are able to change the way json.dumps is called, you can do all the required processing before the JSON encoder gets its hands on the data. This version does not copy anything and edits the structures in place; you can add copy() if required.

```python
import datetime
import json
import collections


def json_debug_handler(obj):
    print("object received:")
    print type(obj)
    print("\n\n")
    if isinstance(obj, collections.Mapping):
        for key, value in obj.iteritems():
            if isinstance(value, (collections.Mapping, collections.MutableSequence)):
                value = json_debug_handler(value)
            obj[key] = convert(value)
    elif isinstance(obj, collections.MutableSequence):
        for index, value in enumerate(obj):
            if isinstance(value, (collections.Mapping, collections.MutableSequence)):
                value = json_debug_handler(value)
            obj[index] = convert(value)
    return obj


def convert(obj):
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()
    elif isinstance(obj, mDict):
        return {'orig': obj, 'attrs': vars(obj)}
    elif isinstance(obj, mList):
        return {'orig': obj, 'attrs': vars(obj)}
    else:
        return obj


class mDict(dict):
    pass


class mList(list):
    pass


def test_debug_json():
    games = mList(['mario', 'contra', 'tetris'])
    games.src = 'console'
    scores = mDict({'dp': 10, 'pk': 45})
    scores.processed = "unprocessed"
    test_json = {'games': games, 'scores': scores, 'date': datetime.datetime.now()}
    print(json.dumps(json_debug_handler(test_json)))


if __name__ == '__main__':
    test_debug_json()
```

You call json_debug_handler on the object you are serializing before passing it to the json.dumps. With this pattern you could also easily reverse the changes and/or add extra conversion rules.
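One hedged way to make extra conversion rules easy to bolt on (Python 3; the `CONVERSION_RULES` registry is invented for illustration, not part of the answer) is to drive `convert` off a list of (predicate, converter) pairs:

```python
import datetime

# Hypothetical rule registry: each entry is (predicate, converter).
# Appending a new pair adds a conversion rule without touching convert().
CONVERSION_RULES = [
    (lambda o: isinstance(o, datetime.datetime), lambda o: o.isoformat()),
    (lambda o: isinstance(o, set), lambda o: sorted(o)),  # an example extra rule
]

def convert(obj):
    for matches, conv in CONVERSION_RULES:
        if matches(obj):
            return conv(obj)
    return obj
```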

edit:

If you can't change how json.dumps is called, you can always monkeypatch it to do what you want. Such as doing this:

```python
# Capture the original first; otherwise the lambda would call itself
# and recurse forever.
_original_dumps = json.dumps
json.dumps = lambda obj, *args, **kwargs: _original_dumps(json_debug_handler(obj), *args, **kwargs)
```

Answers 9

You should be able to override JSONEncoder.encode():

```python
class MyEncoder(JSONEncoder):
    def encode(self, o):
        if isinstance(o, dict):
            # directly call JSONEncoder rather than infinite-looping through self.encode()
            return JSONEncoder.encode(self, {'orig': o, 'attrs': vars(o)})
        elif isinstance(o, list):
            return JSONEncoder.encode(self, {'orig': o, 'attrs': vars(o)})
        else:
            return JSONEncoder.encode(self, o)
```

and then if you want to patch it into json.dumps it looks from http://docs.buildbot.net/latest/reference/json-pysrc.html like you'll need to replace json._default_encoder with an instance of MyEncoder.
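A minimal sketch of that patch (it relies on `json._default_encoder`, a private CPython implementation detail, so this is version-dependent; the `TaggingEncoder` below is a simplified stand-in for the encoder above, since `vars()` on a plain dict would fail):

```python
import json

class TaggingEncoder(json.JSONEncoder):
    # Simplified stand-in: wrap top-level dicts in an 'orig' envelope.
    def encode(self, o):
        if isinstance(o, dict):
            # Call JSONEncoder directly to avoid recursing through self.encode()
            return json.JSONEncoder.encode(self, {'orig': o})
        return json.JSONEncoder.encode(self, o)

# json.dumps() with no keyword arguments delegates to this private
# module-level instance, so replacing it changes bare dumps() calls.
json._default_encoder = TaggingEncoder()
```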

Answers 10

If you are only looking for serialization and not deserialization then you can process the object before sending it to json.dumps. See below example

```python
import datetime
import json


def is_inherited_from(obj, objtype):
    return isinstance(obj, objtype) and not type(obj).__mro__[0] == objtype


def process_object(data):
    if isinstance(data, list):
        if is_inherited_from(data, list):
            return process_object({"orig": list(data), "attrs": vars(data)})
        new_data = []
        for d in data:
            new_data.append(process_object(d))
    elif isinstance(data, tuple):
        if is_inherited_from(data, tuple):
            return process_object({"orig": tuple(data), "attrs": vars(data)})
        new_data = []
        for d in data:
            new_data.append(process_object(d))
        return tuple(new_data)
    elif isinstance(data, dict):
        if is_inherited_from(data, dict):
            return process_object({"orig": list(data), "attrs": vars(data)})
        new_data = {}
        for k, v in data.items():
            new_data[k] = process_object(v)
    else:
        return data
    return new_data


def json_debug_handler(obj):
    print("object received:")
    print("\n\n")
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()


class mDict(dict):
    pass


class mList(list):
    pass


def test_debug_json():
    games = mList(['mario', 'contra', 'tetris'])
    games.src = 'console'
    scores = mDict({'dp': 10, 'pk': 45})
    scores.processed = "unprocessed"
    test_json = {'games': games, 'scores': scores, 'date': datetime.datetime.now()}
    new_object = process_object(test_json)
    print(json.dumps(new_object, default=json_debug_handler))


if __name__ == '__main__':
    test_debug_json()
```

The output is:

{"games": {"orig": ["mario", "contra", "tetris"], "attrs": {"src": "console"}}, "scores": {"orig": ["dp", "pk"], "attrs": {"processed": "unprocessed"}}, "date": "2018-01-24T12:59:36.581689"}

It is also possible to override the JSONEncoder, but since it uses nested methods, that would be complex and require the techniques discussed in:

Can you patch *just* a nested function with closure, or must the whole outer function be repeated?

Since you want to keep things simple, I would not suggest going that route.

Answers 11

Along the lines of FastTurtle's suggestion, but requiring somewhat less code and much deeper monkeying, you can override isinstance itself, globally. This is probably Not A Good Idea, and may well break something. But it does work, in that it produces your required output, and it's quite simple.

First, before json is imported anywhere, monkey-patch the builtins module to replace isinstance with one that lies, just a little bit, and only in a specific context:

```python
_original_isinstance = isinstance

def _isinstance(obj, class_or_tuple):
    if '_make_iterencode' in globals():
        if not _original_isinstance(class_or_tuple, tuple):
            class_or_tuple = (class_or_tuple,)
        for custom in mList, mDict:
            if _original_isinstance(obj, custom):
                return custom in class_or_tuple
    return _original_isinstance(obj, class_or_tuple)

try:
    import builtins  # Python 3
except ImportError:
    import __builtin__ as builtins  # Python 2
builtins.isinstance = _isinstance
```

Then, create your custom encoder, implementing your custom serialization and forcing the use of _make_iterencode (since the c version won't be affected by the monkeypatching):

```python
class CustomEncoder(json.JSONEncoder):
    def iterencode(self, o, _one_shot=False):
        return super(CustomEncoder, self).iterencode(o, _one_shot=False)

    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()
        elif isinstance(obj, mDict):
            return {'orig': dict(obj), 'attrs': vars(obj)}
        elif isinstance(obj, mList):
            return {'orig': list(obj), 'attrs': vars(obj)}
        else:
            return None
```

And that's really all there is to it! Output from Python 3 and Python 2 below.

```
Python 3.6.3 (default, Oct 10 2017, 21:06:48)
...
>>> from test import test_debug_json
>>> test_debug_json()
{"games": {"orig": ["mario", "contra", "tetris"], "attrs": {"src": "console"}}, "scores": {"orig": {"dp": 10, "pk": 45}, "attrs": {"processed": "unprocessed"}}, "date": "2018-01-27T13:56:15.666655"}

Python 2.7.13 (default, May  9 2017, 12:06:13)
...
>>> from test import test_debug_json
>>> test_debug_json()
{"date": "2018-01-27T13:57:04.681664", "games": {"attrs": {"src": "console"}, "orig": ["mario", "contra", "tetris"]}, "scores": {"attrs": {"processed": "unprocessed"}, "orig": {"pk": 45, "dp": 10}}}
```

Answers 12

Can we just preprocess test_json to make it fit your requirement? It's easier to manipulate a Python dict than to write a custom Encoder.

```python
import datetime
import json

class mDict(dict):
    pass

class mList(list):
    pass

def prepare(obj):
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()
    elif isinstance(obj, mDict):
        return {'orig': obj, 'attrs': vars(obj)}
    elif isinstance(obj, mList):
        return {'orig': obj, 'attrs': vars(obj)}
    else:
        return obj

def preprocessor(toJson):
    ret = {}
    for key, value in toJson.items():
        ret[key] = prepare(value)
    return ret

if __name__ == '__main__':
    def test_debug_json():
        games = mList(['mario', 'contra', 'tetris'])
        games.src = 'console'
        scores = mDict({'dp': 10, 'pk': 45})
        scores.processed = "unprocessed"
        test_json = {'games': games, 'scores': scores, 'date': datetime.datetime.now()}
        print(json.dumps(preprocessor(test_json)))
    test_debug_json()
```