Friday, April 1, 2016

Hibernate OGM Neo4j (5.0) Wildfly 10 Error. Provider org.hibernate.ogm.service.impl.OgmIntegrator not a subtype

Leave a Comment

I am getting this error while deploying my EAR.

org.jboss.msc.service.StartException in service jboss.persistenceunit."test.ear/server.war#graphdb": java.util.ServiceConfigurationError: org.hibernate.integrator.spi.Integrator: Provider org.hibernate.ogm.service.impl.OgmIntegrator not a subtype
    at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1$1.run(PersistenceUnitServiceImpl.java:172)
    at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1$1.run(PersistenceUnitServiceImpl.java:117)
    at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:667)
    at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1.run(PersistenceUnitServiceImpl.java:182)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
    at org.jboss.threads.JBossThread.run(JBossThread.java:320)
Caused by: java.util.ServiceConfigurationError: org.hibernate.integrator.spi.Integrator: Provider org.hibernate.ogm.service.impl.OgmIntegrator not a subtype
    at java.util.ServiceLoader.fail(ServiceLoader.java:239)
    at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
    at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
    at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl.loadJavaServices(ClassLoaderServiceImpl.java:324)
    at org.hibernate.integrator.internal.IntegratorServiceImpl.<init>(IntegratorServiceImpl.java:40)
    at org.hibernate.boot.registry.BootstrapServiceRegistryBuilder.build(BootstrapServiceRegistryBuilder.java:213)
    at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.buildBootstrapServiceRegistry(EntityManagerFactoryBuilderImpl.java:288)
    at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.<init>(EntityManagerFactoryBuilderImpl.java:161)
    at org.hibernate.jpa.boot.spi.Bootstrap.getEntityManagerFactoryBuilder(Bootstrap.java:34)
    at org.hibernate.jpa.HibernatePersistenceProvider.getEntityManagerFactoryBuilder(HibernatePersistenceProvider.java:165)
    at org.hibernate.jpa.HibernatePersistenceProvider.getEntityManagerFactoryBuilder(HibernatePersistenceProvider.java:160)
    at org.hibernate.jpa.HibernatePersistenceProvider.createContainerEntityManagerFactory(HibernatePersistenceProvider.java:135)
    at org.hibernate.ogm.jpa.HibernateOgmPersistence.createContainerEntityManagerFactory(HibernateOgmPersistence.java:96)
    at org.jboss.as.jpa.service.PersistenceUnitServiceImpl.createContainerEntityManagerFactory(PersistenceUnitServiceImpl.java:318)
    at org.jboss.as.jpa.service.PersistenceUnitServiceImpl.access$1100(PersistenceUnitServiceImpl.java:67)
    at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1$1.run(PersistenceUnitServiceImpl.java:167)
    ... 7 more

And my persistence.xml is:

<persistence-unit name="graphdb" transaction-type="JTA">
    <!-- Use Hibernate OGM provider: configuration will be transparent -->
    <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>
    <class>com.healthpray.persistence.entities.User</class>
    <properties>
        <property name="hibernate.transaction.jta.platform"
            value="org.hibernate.service.jta.platform.internal.JBossStandAloneJtaPlatform" />
        <property name="hibernate.ogm.datastore.provider" value="neo4j_embedded" />
        <property name="hibernate.ogm.neo4j.database_path" value="/home/manju/testdb" />
    </properties>
</persistence-unit>

Can anyone please tell me what the issue is? I tried removing the JTA platform property as well.

Wildfly 10.0, JPA 2.1, Java 8, Hibernate 5.0.0.Beta1

1 Answer

Answer 1

It was not working because Wildfly was trying to load a different version of Hibernate, causing a binary conflict.

I disabled the JPA subsystem in standalone.xml. Now it's working fine.
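If disabling the subsystem globally is too broad, a per-deployment jboss-deployment-structure.xml (placed in the EAR's META-INF) can exclude the server's JPA subsystem and Hibernate module for just this application. This is only a sketch, and it assumes you bundle the Hibernate ORM/OGM jars inside the deployment itself:

```xml
<!-- META-INF/jboss-deployment-structure.xml (sketch; assumes the
     Hibernate ORM/OGM jars are packaged inside the EAR) -->
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.2">
  <deployment>
    <!-- stop the server from bootstrapping the persistence unit itself -->
    <exclude-subsystems>
      <subsystem name="jpa" />
    </exclude-subsystems>
    <exclusions>
      <!-- keep the server's own Hibernate off the deployment classpath -->
      <module name="org.hibernate" />
    </exclusions>
  </deployment>
</jboss-deployment-structure>
```

With this in place, only the Hibernate classes shipped in the deployment are loaded, so the Integrator SPI and its providers come from the same classloader.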

Read More

max-height and overflow not scrolling on ie9

Leave a Comment

I have a very strange issue on ie9 where a div with a max-height (set with calc() and vh) and overflow: auto is not scrolling.

You can see what is happening by clicking on this image (if the GIF does not load here):


My HTML:

<div class="modal">
  <div class="modal__title">Modal Title</div>
  <div class="modal__body">
    <p>When I am too tall, I should scroll on ie9, but I don't.</p>
  </div>
  <div class="modal__footer">Footer here</div>
</div>

Relevant CSS:

.modal {
  min-width: 500px;
  max-width: 800px;
  border-radius: 4px;
  max-height: 65vh;
  overflow: hidden;
  background-color: white;
  position: fixed;
  top: 15vh;
  left: 50%;
  -webkit-transform: translateX(-50%);
  -ms-transform: translateX(-50%);
  transform: translateX(-50%);
}

.modal__body {
  /* 120px is the combined height of the header and footer */
  max-height: calc(65vh - 120px);
  overflow-y: auto;
}

I don't understand why this is happening, as ie9 supports vh, calc() and max-height. Any ideas?

JSFiddle Demo: https://jsfiddle.net/sbgg5bja/3/

1 Answer

Answer 1

It appears to be a repaint issue when combining position: fixed and transform: translate.

Here are two possible fixes:

  • Set the overflow property to scroll, i.e. overflow-y: scroll
  • -ms-filter: "progid:DXImageTransform.Microsoft.Matrix(M11=1, M12=0, M21=0, M22=1, SizingMethod='auto expand')";

Src: How to solve IE9 scrolling repaint issue with fixed-position parent that has -ms-transform:translate?

If neither of these does it, you could drop the transform: translate and use, for example, display: table to center it.
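For instance, the transform-free centering could look like this. It is only a sketch reusing the question's class names, and it sidesteps the bug by centering with auto margins instead of translateX:

```css
/* Sketch: center the fixed modal without translateX(-50%), so the
   fixed + transform repaint bug cannot trigger on ie9. */
.modal {
  position: fixed;
  top: 15vh;
  left: 0;
  right: 0;            /* stretch the fixed box across the viewport... */
  margin: 0 auto;      /* ...then center it horizontally with auto margins */
  min-width: 500px;
  max-width: 800px;
  max-height: 65vh;
  overflow: hidden;
}

.modal__body {
  max-height: calc(65vh - 120px);
  overflow-y: scroll;  /* fix 1: always show the scrollbar */
}
```

The left: 0 / right: 0 / margin: 0 auto trick works on fixed and absolute boxes with a constrained width, so no transform is needed at all.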

Read More

Authentication issue in BLE Bluetooth Low Energy device

Leave a Comment

We are making an IoT device with a BLE interface, which uses the HM-11 (http://www.seeedstudio.com/wiki/Bluetooth_V4.0_HM-11_BLE_Module) breakout board hosting the CC2541 chip (http://www.ti.com/product/CC2541).

The authentication method is set to 2:Auth with PIN

A clip from the data sheet showing the available authentication modes is as follows:

63. Query/Set Module Bond Mode
    Send: AT+TYPE?         Receive: OK+Get:[para1]   Parameter: None
    Send: AT+TYPE[para1]   Receive: OK+Set:[para1]   Para1: 0~2
        0: Not need PIN Code
        1: Auth not need PIN
        2: Auth with PIN
        3: Auth and bond
    Default: 0

For devices running Android versions below 5.0 it works out just fine.

However

  1. For devices with Android version 5.0, the pairing dialog appears without a displayed PIN or a PIN-entry field, and when the Pair button is clicked it fails to pair, complaining with

    Couldn't pair with MyApp because of an incorrect PIN or passkey.

  2. For devices with Android version 5.1, it does not even show the pairing dialog and fails to pair.

Notes: I tried restarting the devices, forgetting the devices, and clearing bonding information from the device.

Looking for guidance, advice, help, comments, code.

2 Answers

Answer 1

This is a known issue - quite a few users have reported being unable to enter a passkey on Android 5.0. It doesn't appear to occur across all devices.

Other examples of the issue:

http://android.stackexchange.com/questions/88011/android-5-bluetooth-pairing-dialog-has-no-passkey-form

https://en.discussions.tomtom.com/mysports-connect-apps-389/pairing-issue-on-nexus-5-android-5-948640

Answer 2

Bluetooth depends on both hardware and software to work properly. So if your devices can't speak a common Bluetooth language, they won’t be able to connect.

In general, Bluetooth is backwards compatible: Bluetooth devices supporting the Bluetooth 4.2 standard, announced last year, should still be able to pair with devices using, say, the ancient Bluetooth 2.1, launched back in 2007.

The exceptions are gadgets that use a low-energy version called Bluetooth Smart (or Low Energy), which works on a different protocol than older, or "Classic", Bluetooth devices. LE devices are not backward compatible and won't recognize (or pair with) older devices that only support Classic Bluetooth. (For example, an old Sony Ericsson phone sporting Bluetooth 3.0 won't be able to connect to an LE device.) This is probably the cause of your issues, as Android 5 has issues with BLE, and if your device is Classic Bluetooth, that won't work. I suggest you check the compatibility of the device, and if that seems fine, I would fall back on classic Bluetooth until you solve the issue.

Hope this helps.

Read More

Downloading a wav file using Horseman and PhantomJS losing data quality

Leave a Comment

I'm using PhantomJS and HorsemanJS to download a WAV file from a remote server. However, when the file is base64 encoded and written to a new file, it loses quality, which makes it unusable. The audio is there but it's distorted, which leads me to think it's an encoding problem. I'm running on Ubuntu 14.04 using Node v5.

Below is my script. Any ideas on how to improve the base64 encoding?

var Horseman = require('node-horseman');
var horseman = new Horseman({cookiesFile: './cookiejar'});
var fs = require('fs');

horseman.on("urlChanged", function(url){
  console.log(url);
});

horseman.on('consoleMessage', function(msg){
  console.log(msg);
});

horseman
  .userAgent("Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36")
  .open('https://remoteserver.com/audo.aspx?guid=01439900-5361-4828-ad0e-945b56e9fe51')
  .waitForNextPage()
  .type('input[name="password"]', process.env.PASS)
  .type('input[name="username"]', process.env.UN)
  .click("button:contains('Login')")
  .waitForNextPage()
  .evaluate(function(){
    var base64EncodeChars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    function base64encode(str) {
      var out, i, len;
      var c1, c2, c3;

      len = str.length;
      i = 0;
      out = "";
      while (i < len) {
        c1 = str.charCodeAt(i++) & 0xff;
        if (i == len) {
          out += base64EncodeChars.charAt(c1 >> 2);
          out += base64EncodeChars.charAt((c1 & 0x3) << 4);
          out += "==";
          break;
        }
        c2 = str.charCodeAt(i++);
        if (i == len) {
          out += base64EncodeChars.charAt(c1 >> 2);
          out += base64EncodeChars.charAt(((c1 & 0x3) << 4) | ((c2 & 0xF0) >> 4));
          out += base64EncodeChars.charAt((c2 & 0xF) << 2);
          out += "=";
          break;
        }
        c3 = str.charCodeAt(i++);
        out += base64EncodeChars.charAt(c1 >> 2);
        out += base64EncodeChars.charAt(((c1 & 0x3) << 4) | ((c2 & 0xF0) >> 4));
        out += base64EncodeChars.charAt(((c2 & 0xF) << 2) | ((c3 & 0xC0) >> 6));
        out += base64EncodeChars.charAt(c3 & 0x3F);
      }
      return out;
    }

    var url = $("a:contains('Uncompressed file')").attr("href");

    console.log(url);

    var out;
    $.ajax({
      'async': false,
      'url': url,
      'success': function(data, status, xhr) {
        console.log(status);
        console.log(xhr.getResponseHeader('Content-Type'));
        out = base64encode(data);
      }
    });
    return out;
  })
  .then(function(out){
    fs.writeFile('./mydownloadedfile.txt', out, 'base64', function(){
      return horseman.close();
    });
  });

The content-type comes back as audio/wav

Any suggestions welcome!

1 Answer

Answer 1

Why don't you use Buffer for base64 encoding and decoding:

function base64Encode(plainData) {
    return new Buffer(plainData).toString('base64');
}

function base64Decode(encodedData) {
    return new Buffer(encodedData, 'base64').toString();
}

in your script:

var out;
$.ajax({
  'async': false,
  'url': url,
  'success': function(data, status, xhr) {
    console.log(status);
    console.log(xhr.getResponseHeader('Content-Type'));
    out = base64Encode(data);
  }
});
return out;
})...
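As a side note on why Buffer matters here: binary payloads must never pass through a default UTF-8 string conversion, because bytes at or above 0x80 get silently replaced, which is exactly the kind of distortion described in the question. A small Node sketch illustrating the difference:

```javascript
// Sketch: Buffer round-trips bytes losslessly via base64, while a
// UTF-8 string conversion mangles any byte >= 0x80.
const bytes = Buffer.from([0x00, 0xff, 0x80, 0x7f, 0x10]);

// Lossless: bytes -> base64 -> bytes
const b64 = bytes.toString('base64');
const roundTrip = Buffer.from(b64, 'base64');
console.log(roundTrip.equals(bytes)); // true

// Lossy: 0xff and 0x80 are invalid UTF-8, so they become replacement
// characters and re-encode to different bytes.
const viaUtf8 = Buffer.from(bytes.toString('utf8'), 'utf8');
console.log(viaUtf8.equals(bytes)); // false
```

So whichever encoder is used, the raw response bytes have to stay out of UTF-8 string land between download and write.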
Read More

Rails: Validate uniqueness of multiple columns

Leave a Comment

Is there a Rails way to validate that an entire record is unique, and not just a single column? For example, a friendship model/table should not be able to have multiple identical records like:

user_id: 10 | friend_id: 20
user_id: 10 | friend_id: 20

3 Answers

Answer 1

You can scope a validates_uniqueness_of call as follows.

validates_uniqueness_of :user_id, :scope => :friend_id 

Answer 2

You can use validates to validate uniqueness on one attribute:

validates :user_id, uniqueness: {scope: :friend_id} 

The syntax for the validation on multiple columns is similar, but you should provide an array of fields instead:

validates :attr, uniqueness: {scope: [:attr1, ... , :attrn]} 

However, the approaches shown above suffer from race conditions; consider the following example:

  1. database table records are supposed to be unique by n fields;

  2. multiple (two or more) concurrent requests, handled by separate processes each (application server, sidekiq or whatever you are using), try to write the same record to the table;

  3. each process in parallel validates if there is a record with the same n fields;

  4. validation for each request is passed and each process creates a record in the table with the same data.

To avoid this kind of behaviour, one should add a unique constraint to the db table. You can set it with add_index for multiple (or one) fields by running the following migration:

class AddUniqueConstraints < ActiveRecord::Migration
  def change
    add_index :table_name, [:field1, ... , :fieldn], unique: true
  end
end

Caveat: even after you've set the unique constraint, two or more concurrent requests may still try to write the same data to the db, but instead of creating duplicate records this will raise an ActiveRecord::RecordNotUnique exception, which you should handle separately:

begin
  # writing to the database
rescue ActiveRecord::RecordNotUnique => e
  # handling the case when record already exists
end
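The rescue pattern can be exercised without Rails. Here is a minimal, framework-free sketch (the store class and its method names are made up for illustration) of the same "insert, and on a uniqueness violation fall back to the row that won the race" idea:

```ruby
# Hypothetical sketch of the rescue pattern above, without Rails: a tiny
# in-memory store that raises on duplicates, plus a helper that handles
# the race by rescuing the error instead of validating first.
class RecordNotUnique < StandardError; end

class FriendshipStore
  def initialize
    @rows = {}
  end

  def insert(user_id, friend_id)
    key = [user_id, friend_id]
    raise RecordNotUnique if @rows.key?(key) # the unique index firing
    @rows[key] = { user_id: user_id, friend_id: friend_id }
  end

  # Mirrors rescuing ActiveRecord::RecordNotUnique: on a duplicate,
  # return the existing record instead of blowing up.
  def find_or_insert(user_id, friend_id)
    insert(user_id, friend_id)
  rescue RecordNotUnique
    @rows[[user_id, friend_id]]
  end
end

store = FriendshipStore.new
store.insert(10, 20)
p store.find_or_insert(10, 20) # falls back to the existing row
```

In a real app the constraint lives in the database, so the rescue is the only validation that cannot race.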

Answer 3

You probably do need actual constraints on the db, because validates suffers from race conditions.

validates_uniqueness_of :user_id, :scope => :friend_id 

When you persist a user instance, Rails will validate your model by running a SELECT query to see if any user records already exist with the provided user_id. Assuming the record proves to be valid, Rails will run the INSERT statement to persist the user. This works great if you’re running a single instance of a single process/thread web server.

In case two processes/threads are trying to create a user with the same user_id around the same time, the following situation may arise. [Diagram: race condition with validates]

With unique indexes on the db in place, the above situation will play out as follows. [Diagram: unique indexes on the db]

Answer taken from this blog post - http://robots.thoughtbot.com/the-perils-of-uniqueness-validations

Read More

How to handle different reference directions in database and ZF2 application?

Leave a Comment

Zend\Form\Fieldsets and Zend\Form\Collections can be nested and provide a very comfortable way to map complex object structures to them, in order to get a complete object (ready to be saved) from the form input more or less automatically. The Form Collections tutorial provides a very good example.

The case I'm currently having is a bit more complex, since it contains a reference inversion. That means:

I have two entities -- MyA and MyB -- and while in the database the relationship between them is implemented as a FOREIGN KEY from my_b.my_a_id to my_a.id, the application uses an inverted reference:

MyA has MyB 

Or with some code:

namespace My\DataObject;

class MyA
{
    /**
     * @var integer
     */
    private $id;
    /**
     * @var text
     */
    private $foo;
    /**
     * @var MyB
     */
    private $myB;
}

namespace My\DataObject;

class MyB
{
    /**
     * @var integer
     */
    private $id;
    /**
     * @var text
     */
    private $bar;
    /*
    Actually it's even bidirectional, but that's not crucial for this issue.
    For this problem it's not important
    whether the class MyB has a property of type MyA.
    We get the issue already
    when we define a property of type MyB in the class MyA,
    since the direction of the reference MyA.myB->MyB differs
    from the direction of the reference my_b.my_a_id->my_a.id.
    */

    /**
     * @var MyA
     */
    // private $myA;
}

My Mapper objects get DataObjects passed as argument: MyAMapper#save(MyA $object) and MyBMapper#save(MyB $object).

namespace My\Mapper;

use ...

class MyAMapper
{
    ...
    public function save(MyA $object)
    {
        // save the plain MyA properties as a new entry in the my_a table
        ...
        $myBMapper->save($object->getMyB());
    }
}

namespace My\Mapper;

use ...

class MyBMapper
{
    ...
    public function save(MyB $object)
    {
        // save the plain MyB properties as a new entry in the my_b table
        ...
    }
}

That means the MyAMapper#save(...) has everything needed to save the MyA object to the my_a table. But in the MyBMapper the data for my_b.my_a_id will be missing.

And I also cannot create a fieldset MyAFieldset with a nested fieldset MyBFieldset and then additionally nest the fieldset MyAFieldset into MyBFieldset in order to fill MyA#MyB#MyA (in order to pass the data for my_b.my_a_id to MyBMapper#save(...)):

class MyAFieldset
{
    $this->add([
        'name' => 'my_b',
        'type' => 'My\Form\Fieldset\MyBFieldset',
        'options' => []
    ]);
}

class MyBFieldset
{
    $this->add([
        'name' => 'my_a',
        'type' => 'My\Form\Fieldset\MyAFieldset',
        'options' => []
    ]);
}

This would cause a recursive dependency and cannot work.

How does one handle a case where the reference direction on the application level differs from its direction in the database? How can one still create a fieldset structure that provides a complete ("ready to be saved") object?


Workaround 1

When the form is processed, a further MyA object can be created and added to the MyB object got from the form:

class MyController
{
    ...
    public function myAction()
    {
        $this->myForm->bind($this->myA);
        $request = $this->getRequest();
        $this->myForm->setData($request->getPost());
        // here the hack #start#
        $this->myB->setMyA($this->myA);
        // here the hack #stop#
        $this->myAService->saveMyA($this->myA);
    }
}

Well, maybe not in the controller; the mapper might be a better place for that:

class MyAMapper
{
    ...
    public function save(MyA $myA)
    {
        $data = [];
        $data['foo'] = [$myA->getFoo()];
        // common saving stuff #start#
        $action = new Insert('my_a');
        $action->values($data);
        $sql = new Sql($this->dbAdapter);
        $statement = $sql->prepareStatementForSqlObject($action);
        $result = $statement->execute();
        $newId = $result->getGeneratedValue();
        // common saving stuff #stop#
        ...
        // hack #start#
        if (! $myA->getMyB()->getMyA()) {
            $myA->getMyB()->setMyA(new MyA());
            $myA->getMyB()->getMyA()->setId($newId);
        }
        // hack #stop#
        // and only after all that we can save the MyB
        $myB = $this->myBMapper->save($myA->getMyB());
        $myA->setMyB($myB);
        ...
    }
}

But anyway it's just a hack.

Workaround 2

The MyB class gets a property $myAId. But it's also not a clean way.

Workaround 3

The MyBFieldset gets a MyAFakeFieldset as sub-fieldset. This fieldset class is then just a "shallow" copy of the MyAFieldset that contains only the ID of the MyA data object:

class MyAFieldset
{
    ...
    public function init()
    {
        $this->add([
            'type' => 'text',
            'name' => 'id',
            'options' => [...],
        ]);
        $this->add([
            'type' => 'text',
            'name' => 'foo',
            'options' => [...],
        ]);
    }
}

class MyBFieldset
{
    ...
    public function init()
    {
        $this->add([
            'type' => 'text',
            'name' => 'id',
            'options' => [...],
        ]);
        $this->add([
            'type' => 'text',
            'name' => 'bar',
            'options' => [...],
        ]);
        $this->add([
            'name' => 'my_a',
            'type' => 'My\Form\Fieldset\MyAFakeFieldset',
            'options' => [...],
        ]);
    }
}

class MyAFakeFieldset
{
    ...
    public function init()
    {
        $this->add([
            'type' => 'text',
            'name' => 'id',
            'options' => [...],
        ]);
    }
}

But fake objects are a bit dirty as well.

1 Answer

Answer 1

How about creating a new table to handle the mappings on their own? Then you can isolate that complexity away from the objects that take advantage of them.

So, you could have a new object AtoBMappings

namespace My\DataObject;

class MyA
{
    /**
     * @var integer
     */
    private $id;
    /**
     * @var text
     */
    private $foo;
    /**
     * @var MyAtoB
     */
    private $myAtoB;
}

namespace My\DataObject;

class MyB
{
    /**
     * @var integer
     */
    private $id;

    /**
     * @var AtoBMapperID
     */
    private $myAtoB;
}

class MyAtoBMapper
{
    /**
     * @var MyB
     */
    private $myB;
    /**
     * @var MyA
     */
    private $myA;
}

Then, instead of hacking your mapper method, you can simply make the assignment when creating MyA and MyB.

class MyAMapper
{
    ...
    public function save(MyA $myA)
    {
        $myAtoB = new MyAtoBMapper();
        // ... insert the new myAtoB into the DB

        $data = [];
        $data['foo'] = [$myA->getFoo()];
        $data['myAtoB'] = $myAtoB->getId();
        // common saving stuff #start#
        $action = new Insert('my_a');
        $action->values($data);
        $sql = new Sql($this->dbAdapter);
        $statement = $sql->prepareStatementForSqlObject($action);
        $result = $statement->execute();
        $newId = $result->getGeneratedValue();
        $myA->setMyAtoB($myAtoB);
        $myAtoB->myA = $newId;
        // common saving stuff #stop#
        // and only after all that we can save the MyB
        $myB = $this->myBMapper->save($myA->getMyB());
        $myB->setMyAtoB($myAtoB);
        $myAtoB->myB = $myB;
        ...
    }
}

Do you think this would work, or do you think this is too much of a hack?

Read More

Remote Service deny permission onBind

Leave a Comment

I have a remote service, which external applications can bind to. There are situations where I may wish to decline the binding. According to the documentation,

Return the communication channel to the service. May return null if clients can not bind to the service.

@Override
public IBinder onBind(final Intent intent) {
    return null;
}

Returning null does indeed mean no IBinder object is returned, and therefore prevents the connection; however, the calling application does not correctly receive this 'information'.

boolean bound = context.bindService(intent, serviceConnection, flagsHere); 

Whether the Service returns null or not, this call always returns true.

According to the documentation,

Returns - If you have successfully bound to the service, true is returned; false is returned if the connection is not made so you will not receive the service object

I had assumed that returning null from onBind would have caused bindService to return false. Assumptions are never a good idea...

Returning null does, however, prevent the ServiceConnection callbacks from being invoked, but a consequence of this is that there is no opportunity to check in onServiceConnected whether the binder is in fact null.

So, my question - How does an application 'know' if the binding request has been denied?

Additionally, if I decide on the fly that a request to onRebind (having previously returned true from onUnbind) should be declined, I seem to be unable to override the behaviour to prevent this:

@Override
public void onRebind(final Intent intent) {
    if (shouldAllowRebind(intent)) {
        super.onRebind(intent);
    } else {
        // ?
    }
}

I hope someone can shed some light for me. Thanks in advance.

1 Answer

Answer 1

You probably have to create a workaround. I see two options here:

  • Return a Binder without any functionality if the request should be denied. The client then has to check if the wanted functionality is there. (probably with instanceof).
  • Always return the same Binder, but let every method throw an Exception (e.g. SecurityException) if the call is not permitted. (This was also suggested by @CommonsWare in the comments)

I would personally prefer the second approach, as it is more flexible (e.g. it allows per-call permit/deny, solves the problem of denying stuff after a rebind, etc.).
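A framework-free sketch of that second option (the interface and all names here are hypothetical, not Android APIs): hand back one facade regardless of the caller, and gate every call, throwing SecurityException when the caller is not permitted. In a real Service this logic would live in the methods of the Binder returned from onBind().

```java
// Hypothetical sketch: deny per call on the returned object instead of
// denying the bind itself.
public class GatedBinderSketch {
    interface RemoteApi {
        String fetchData();
    }

    static RemoteApi createApi(final boolean permitted) {
        return new RemoteApi() {
            @Override
            public String fetchData() {
                // Per-call permit/deny: throw rather than returning null
                if (!permitted) {
                    throw new SecurityException("caller not permitted");
                }
                return "payload";
            }
        };
    }

    public static void main(String[] args) {
        System.out.println(createApi(true).fetchData());
        try {
            createApi(false).fetchData();
        } catch (SecurityException e) {
            System.out.println("denied: " + e.getMessage());
        }
    }
}
```

The client then wraps each remote call in a try/catch for SecurityException, which also gives it a clean way to detect a denied rebind.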

Read More